
# Westfalische Wilhelms-Universitat Munster

Institut fur
Mathematische Logik und Grundlagenforschung

An Introduction
to
Mathematical Logic
Lecture given
by
Wolfram Pohlers
worked out and
supplemented
by
Thomas Gla

## Typewritten by Martina Pfeifer

An Introduction
to
Mathematical Logic

Wolfram Pohlers
Thomas Gla
Institut fur
Mathematische Logik und Grundlagenforschung
Westfalische Wilhelms-Universitat Munster
Einsteinstrae 62
D 48149 Munster

Typeset AMS-LaTEX
Preface

This text is based on my personal notes for the introductory courses in Mathematical
Logic I gave at the University of Münster in the years 1988 through 1991. They have
been worked out and supplemented by Thomas Glaß, to whom I express my sincere
thanks for a job well done.
Our courses have been planned for freshmen in Mathematical Logic with a certain
background in Mathematics. Though self-contained in principle, this text will therefore
sometimes appeal to the mathematical experience of the reader.
According to the aim of the lectures, to give the student a firm basis for further
studies, we tried to cover the central parts of Mathematical Logic.

The text starts with a chapter treating first order logic. The examples for the application
of Gentzen's Hauptsatz in this section give a faint flavour of how to apply proof theoretical
methods in Mathematical Logic.
Fundamentals of model theory are treated in the second chapter, fundamentals of
recursion theory in chapter 3. We close with an outline of other formulations of (first
order and non-first order) logics.
Nearly nothing, however, is said about set theory. This is usually taught in an
extra course. Thus there is an appendix in which we develop the small part of the
theory of ordinal and cardinal numbers needed for these notes on the basis of a naive
set theory.
One of the highlights of this text are Gödel's incompleteness theorems. The true
reason for these theorems is the possibility of coding the language of number theory by
natural numbers. Only a few conditions have to be satisfied by this coding. Since we
believe that a development of such a coding in all its awkward details could mystify
the simple basic idea of Gödel's proof, we just required the existence of a suitable
arithmetisation and postponed the details of its development to the appendix.
I want to express my warmest thanks to all persons who helped in finishing this text.
Besides Thomas Glaß, who did the work of a co-author, I want to mention Andreas
Schlüter in the first place. He not only did most of the exercises but also most of
the proof-reading. Many improvements are due to him. My thanks go also to all our
students who detected and reported errors in a first version and gave us many helpful
critical remarks. We do not regard these notes as finished. Therefore we are still open
for suggestions and criticism and will appreciate all reports about errors, both typing
errors and possibly more serious errors.
Last but not least I want to thank our secretary Martina Pfeifer who TeXed the
main bulk of this paper.

## Munster, October 1992 Wolfram Pohlers

Contents

Historical Remarks
Notational Conventions
1 Pure Logic
  Heuristical Preliminaries
  1.1 First Order Languages
  1.2 Truth Functions
  1.3 Semantics for First Order Logic
  1.4 Propositional Properties of First Order Logic
  1.5 The Compactness Theorem for First Order Logic
  1.6 Logical Consequence
  1.7 A Calculus for Logical Reasoning
  1.8 A Cut Free Calculus for First Order Logic
  1.9 Applications of Gentzen's Hauptsatz
  1.10 First Order Logic with Identity
  1.11 A Tait-Calculus for First Order Logic with Identity
2 Fundamentals of Model Theory
  2.1 Conservative Extensions and Extensions by Definitions
  2.2 Completeness and Categoricity
  2.3 Elementary Classes and Omitting Types
3 Fundamentals of the Theory of Decidability
  3.1 Primitive Recursive Functions
  3.2 Primitive Recursive Coding
  3.3 Partial Recursive Functions and the Normal Form Theorem
  3.4 Universal Functions and the Recursion Theorem
  3.5 Recursive, Semi-recursive and Recursively Enumerable Relations
  3.6 Rice's Theorem
  3.7 Random Access Machines
  3.8 Undecidability of First Order Logic
4 Axiom Systems for the Natural Numbers
  4.1 Peano Arithmetic
  4.2 Gödel's Theorems
5 Other Logics
  5.1 Many-Sorted Logic
  5.2 ω-Logic
  5.3 Higher Order Logic
Appendix
  A.1 The Arithmetisation of NT
  A.2 Naive Theory of the Ordinals
  A.3 Cardinal Numbers
Bibliography
  Historical Texts
  Original Articles
  Text Books
Glossary
Index
Historical Remarks

Nowadays mathematics is an extremely heterogeneous science. If we try to find a
generic term for mathematical activities we encounter the astonishing difficulty of
such an enterprise. In former times mathematics was described as the 'science of
magnitudes'. Nowadays this is no longer true. We regard a science as a mathematical
science mainly not because of its contents but rather because of its methods.
The characteristic of a mathematical theory is that it proves its claims in an exact
way, i.e. it derives its results from generally accepted basic assumptions without using
further information such as empirical experiments etc. Then the next question to be
asked is: "What does it mean to 'derive something from something'?", i.e. what is a
mathematical proof?
Still in the last century a proof was more or less a matter of intuition. A sentence
was regarded to be a theorem when it was accepted by the mathematical community.
We know 'proofs' of theorems which are considered false these days (although
the theorem is true, which is a point for the intuition of the researchers involved).
However, it seems to have been clear at all times that 'logical' reasoning should be
ruled by laws, and the efforts to investigate them reach back to the times of antiquity.
The oldest known 'logical system' is the Syllogistic of Aristotle [*384 B.C., †322 B.C.].
We will not describe Syllogistic here. All we want to say is that it is by far too weak to
describe mathematical reasoning. About the same time there was also some research
on logical reasoning by the Megarians and Stoics, which in some sense was much more
modern than that of Aristotle.
The influence of Aristotle's work was tremendous. It ruled the Middle Ages. The
fact that the Roman Church had taken up, with some adjustments, Aristotle's
philosophy created an over-great respect for Aristotle's work. This, together with
other traditions, paralysed logical research and restricted it mainly to work-outs of
Aristotle's systems of Syllogistic.
In that time a remarkable book with new ideas was Ars Magna (1270) by Raimundus
Lullus [*1235, †1315], in which he suggested that all knowledge in the sciences
is obtained from a number of root ideas. The joining together of the root ideas is the
'ars magna' (the great art). Lullus himself did not really develop a theory but his
ideas still influenced Leibniz' work. One of the lasting challenges of Lullus' ideas
was the development of a general language for a general science.
The first one to have the idea of developing a general language in a mathematical
way was René Descartes [*1596, †1650]. But because of the great influence of the
Roman Church he did not publish his ideas.
Such attempts were made by Gottfried Wilhelm Leibniz [*1646, †1716]. In his
De arte combinatoria (1666) he suggested a 'mathematics of ideas'. Leibniz regarded
mathematics as the key science which should be able to decide all questions 'in so far
as such decisions are possible by reasoning from given facts'. So he tried to develop
a general algorithm which could decide any question (obeying the just mentioned
restrictions). Using that algorithm he wanted to decide whether God exists. Of course
he failed (he had to, as we know today).
The real start of mathematical logic were George Boole's [*1815, †1864] books
The Mathematical Analysis of Logic (1847) and The Laws of Thought (1854). Three
decades later, Gottlob Frege [*1848, †1925] published his book Begriffsschrift, eine
der arithmetischen nachgebildete Formelsprache des reinen Denkens (1879). In the
work of both authors the central point is a formalisation of the notions of 'sentence'
and 'inference'.
Boole, influenced by the work of William Hamilton [*1780, †1856] and Augustus
De Morgan [*1806, †1878], opted for an algebraic notation while Frege designed
an artificial language on the model of colloquial language. Because of its complicated
two-dimensional notation Frege's formalisation did not succeed. Boole's concept
of an algebra of logic still had some flaws which prevented it from being commonly
accepted. Nowadays, after the errors in Boole's concept have been taken out, the notion
of a boolean algebra has become central in mathematical logic.
The breakthrough in the development of mathematical logic was the Principia
Mathematica (1910, 1912, 1913) by Alfred North Whitehead [*1861, †1947] and
Bertrand Russell [*1872, †1970]. Their notions relied on previous work by Giuseppe
Peano [*1858, †1932]. His book Formulaire de Mathématique (1897) presented a
completely developed formalism for the theory of logic and thus launched what we
now call mathematical logic.
Kurt Godel [ 1906, y1978] and other pioneers of modern mathematical logic used
the `Principia' as their main reference. Mathematical logic investigates mathematical
reasoning by mathematical methods. This self-referential character distinguishes it
from other elds of mathematics and is the reason why logic sometimes is regarded as
a somewhat strange part of mathematics.
The best example of the kind of strangeness we mean are the famous Godel's
incompleteness theorems which show us the limits of formalisations. These theorems
the things we claim here.
Nowadays mathematical logic is divided into four subfields:

- Recursion Theory
- Set Theory
- Model Theory
- Proof Theory
Having discussed the basics of logic (pure logic) we are going to obtain some connections
to model theory (fundamentals of model theory). After that we develop the basic
notions of recursion theory (fundamentals of the theory of decidability) and turn to
the fundamentals of proof theory in the fourth chapter. In the last chapter we will
consider some other formulations of logic.

Notational Conventions
- *iff* stands for *if and only if*.
- ∅ denotes the empty set.
- ℕ is the set of natural numbers 0, 1, 2, …
- If f : X → Y is a function, we call X the domain of f, i.e. X = dom(f). The
  range of f is the set rg(f) = {y ∈ Y : ∃x ∈ X (f(x) = y)}.
- If f : X → Y is a function and Z ⊆ X, then f ↾ Z denotes the restriction of f to
  Z, i.e. f ↾ Z : Z → Y with (f ↾ Z)(x) = f(x) for x ∈ Z.
- X^Y is the set of functions f : Y → X.
- Pow(X) denotes the power set of X, i.e. the set of all subsets of X.
- X \ Y denotes the set X without Y, i.e. the set {x ∈ X : x ∉ Y}.
- X ∪ Y denotes the union of X and Y, i.e. the set {x : x ∈ X or x ∈ Y}.
- X ∩ Y denotes the intersection of X and Y, i.e. the set {x : x ∈ X and x ∈ Y}.
- id_X is the identity on X, i.e. dom(id_X) = X and id_X(x) = x for x ∈ X.
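For finite functions these conventions can be tried out concretely. The following sketch (our own illustration, with a Python dict standing in for a function; the names `dom`, `rg` and `restrict` merely mirror the notation above) shows dom(f), rg(f) and f ↾ Z:

```python
# A finite function f : X -> Y modelled as a dict.
f = {"a": 1, "b": 2, "c": 1}

dom = set(f)             # dom(f) = X = {"a", "b", "c"}
rg = set(f.values())     # rg(f)  = {1, 2}; note rg(f) need not be all of Y

def restrict(f, Z):
    """f restricted to Z: defined only on dom(f) ∩ Z, same values there."""
    return {x: y for x, y in f.items() if x in Z}

print(restrict(f, {"a", "c"}))   # → {'a': 1, 'c': 1}
```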
Chapter 1
Pure Logic

Heuristical Preliminaries
In the historical remarks we already emphasised that the development of a formal
language belongs to the main tasks of mathematical logic. To see what will be needed for
a formal language of mathematics we are going to examine examples of mathematical
propositions, such as
5 | 15,
i.e. the natural number 5 divides the natural number 15, or
(3 + 4) = (2 + 5).
These propositions tell us facts about natural numbers. The first one tells us that the
two natural numbers 5 and 15 share the property that one (the first) divides the other
(the second). Such properties, which may be shared by one or more objects (natural
numbers in our example), will be called predicates. The number of objects which may
share a predicate is called the arity of the predicate. The equality of natural numbers,
for instance, is a binary predicate. Whenever we have an n-ary predicate P and n
objects o1, …, on, then (P o1 … on) is a proposition, something which can either be
true or false.
The second example is a bit more complex. In it we no longer compare two
objects but two things, 3 + 4 and 2 + 5, which are built up from objects by functions.
Such things will be called terms. Terms can be evaluated and the evaluation will
yield an object. Thus objects in predicates may well be replaced by terms and still
represent a proposition. We may even replace objects occurring in terms by terms
and still obtain a term. To get a uniform definition we could say that every object is
a term and that more complex terms are obtained by applying an n-ary function f
to already formed terms t1, …, tn, i.e. by building (f t1 … tn). Once propositions are
formed, we may compose them to more complex ones by using sentential connectives.
In colloquial language we compose propositions by connectives such as
'and'
  it is raining and I'm taking my umbrella,
'or'
  it is raining or the sun is shining,
'if … then'
  if it is raining, then I'm going to use my umbrella,
'not' etc.
We will use the symbols ∧, ∨, →, ¬ to denote these connectives. Of course we will
have to give them a mathematically exact meaning (which of course should be as close
as possible to their meaning in colloquial language, because it is colloquial language
which conserves our long experience in thinking). This will be done in section 1.2.
The use of sentential connectives, however, does not exhaust all the possibilities of
forming more complex propositions. In mathematics we usually deal with structures.
Let's take the example of a group. A group 𝒢 consists of a non-empty set G of objects,
together with a binary group operation, say ∘, a neutral element, say 1, and the equality
relation. The only propositions which are legal with respect to our hitherto collected
rules are equations between terms and their sentential combinations. But this does
not even allow us to express that 1 is the neutral element of 𝒢. In order to do that we
have to say something like:
1 ∘ x = x and x ∘ 1 = x for all objects x in G.
Here x is a symbol for an arbitrary element of G, i.e. x is a variable. This variable
is quantified by saying for all x in G. Thus we also have to introduce the universal
quantifier ∀x (or with some other name, say y, z, …, for the variable).
The universal quantifier alone does not suffice to express that 𝒢 is a group. To
formulate the existence of the inverse object we have to say
∀x there exists an object y with x ∘ y = 1.
Thus we also need the dual of the universal quantifier, the existential quantifier ∃x.
Altogether this means that, in order to describe a group in our formal language, we have
to allow object variables replacing objects in the formation of terms, and quantifiers
binding them.
These are all the ingredients of a first order language. This language is already
quite powerful. E.g. it suffices for the formalisation of group axioms, ring axioms etc.
However, one can imagine much more powerful languages. So we might introduce
variables for predicates and quantifiers binding them. This would be called a second
order language. Third or even higher order languages can be obtained by
iterating this process, i.e. allowing quantification over predicates on predicates etc.
Now we close our preliminary words and try to put these ideas into mathematical
definitions.

## 1.1 First Order Languages

In the heuristical preliminaries we spoke about creating a language for mathematics.
Now we are going to put those informal ideas into mathematical definitions. After
having defined precisely the formal expressions and their meaning we will analyse the
expressive power of so-called first order logic.
The heuristical studies of the previous section already give us a clear picture of how
to design a formal language. All we have to do in this section is to translate this
picture into a mathematical definition. The strategy will be the following. To design
a language we first have to fix its alphabet. Then we need grammars which tell us how
to get regular expressions out of the letters of the alphabet.
In first order languages the regular expressions will be the terms and the formulas.
Definition 1.1.1. The alphabet of a first order language consists of
1. countably many object variables, denoted by x, y, z, x0, …;
2. a set C of constant symbols, denoted by c, d, c0, …;
3. a set F of function symbols, denoted by f, g, h, f0, … Every function symbol
f ∈ F has an arity #f ∈ {1, 2, 3, …};
4. a set P of predicate symbols, denoted by P, Q, R, S, P0, … Every predicate
symbol P ∈ P has an arity #P ∈ {1, 2, 3, …};
5. the sentential connectives ∧ (and), ∨ (or), ¬ (not), → (implies) and the
quantifiers ∀ (for all), ∃ (there is);
6. parentheses (, ) as auxiliary symbols.
A first order language is obviously characterised by the set C (the constant symbols),
the set F (the function symbols) and the set P (the predicate symbols). We call the
elements of C ∪ F ∪ P the non-logical symbols of a language. All the other symbols,
i.e. variables, sentential connectives, quantifiers and auxiliary symbols, are called
logical symbols. They do not differ between first order languages. To emphasise that L is a
first order language depending on the sets C, F and P we often write L = L(C, F, P).
First order languages are denoted by L, L0, …
To make this definition more concrete we are going to give an example: think of
formalising group theory. We declare a first order language L(C_GT, F_GT, P_GT) in which we
are able to make all statements concerning (elementary) group theory. There we have
- a constant symbol 1 for the neutral element,
- a function symbol ∘ for the group operation, with #∘ = 2,
- a predicate symbol = for the equality relation on the group; = is binary, too.
Thus we have C_GT = {1}, F_GT = {∘} and P_GT = {=}. Using this alphabet we are
able to talk about statements concerning a group. But how do we build up (regular)
statements? This will be done in general (for any first order language) in two steps.
In the first step we will declare how to use variables, constant and function symbols.
The expressions obtained in this step will be called terms, and in the second step we
will introduce how to use predicate symbols, sentential connectives and quantifiers to
obtain expressions called formulas.
8 I. Pure Logic
Definition 1.1.2. Let L(C, F, P) be a first order language. We simultaneously list
the rules for term formation and for the computation of the set FV(t) of variables
occurring free in the term t.
1. Every variable x and every constant symbol c ∈ C is a term. We have FV(x) = {x}
and FV(c) = ∅.
2. If t1, …, tn are terms and f ∈ F is a function symbol of arity #f = n, then
(f t1 … tn) is a term. We have FV(f t1 … tn) = FV(t1) ∪ … ∪ FV(tn). If f is binary
we usually write (t1 f t2) instead of (f t1 t2) and call (t1 f t2) the infix notation of
the term (f t1 t2).
Terms are denoted by r, s, t, r0, …
Because terms depend on the given language we will call them L-terms if we want
to emphasise the language. With the alphabet for group theory we can build up the
following terms:
- (∘x1)
- (∘(∘1x)(∘1(∘yz)))
which don't look much like something concerning groups, because usually one would like
to use the infix notation in connection with binary function and predicate symbols. Then
the above terms read as
- (x ∘ 1)
- ((1 ∘ x) ∘ (1 ∘ (y ∘ z)))
and we have the free variables
- FV((x ∘ 1)) = {x}
- FV(((1 ∘ x) ∘ (1 ∘ (y ∘ z)))) = {x, y, z},
i.e. FV(t) is the set of variables occurring in the term t.
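The recursive computation of FV in Definition 1.1.2 can be sketched in a few lines. The encoding below (a variable as a string, a constant c as the one-element tuple `(c,)`, a term (f t1 … tn) as a tuple headed by the function symbol) is our own illustrative choice, not the text's:

```python
def FV(t):
    """Free variables of a term, following Definition 1.1.2."""
    if isinstance(t, str):      # clause 1: FV(x) = {x} for a variable x
        return {t}
    _, *args = t                # constants have no arguments: FV(c) = ∅
    out = set()                 # clause 2: FV(f t1 ... tn) = FV(t1) ∪ ... ∪ FV(tn)
    for s in args:
        out |= FV(s)
    return out

# (x ∘ 1) and ((1 ∘ x) ∘ (1 ∘ (y ∘ z))) in this encoding:
x_times_1 = ("∘", "x", ("1",))
big = ("∘", ("∘", ("1",), "x"), ("∘", ("1",), ("∘", "y", "z")))
print(FV(x_times_1))        # → {'x'}
print(sorted(FV(big)))      # → ['x', 'y', 'z']
```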
Definition 1.1.3. Let L(C, F, P) be a first order language. Simultaneously with the
grammar for formulas we introduce the rules for the computation of the set FV(F) of
variables occurring free and the set BV(F) of variables occurring bound in the formula F.
1. If t1, …, tn are L-terms and P ∈ P is a predicate symbol of arity #P = n,
then (P t1 … tn) is a formula. We have FV(P t1 … tn) = FV(t1) ∪ … ∪ FV(tn) and
BV(P t1 … tn) = ∅. If P is binary we often write (t1 P t2) instead of (P t1 t2).
2. If F and G are formulas, then so are
(¬F), (F ∧ G), (F ∨ G), (F → G).
We have FV(¬F) = FV(F), BV(¬F) = BV(F) and FV(F ◦ G) = FV(F) ∪ FV(G),
BV(F ◦ G) = BV(F) ∪ BV(G) for ◦ ∈ {∧, ∨, →}.
3. If F is a formula and x is a variable such that x ∉ BV(F), then (∀xF) and (∃xF)
are formulas with FV(QxF) = FV(F) \ {x} and BV(QxF) = BV(F) ∪ {x} for
Q ∈ {∀, ∃}.
Formulas are denoted by F, G, H, F0, …
Thus formulas depend on the language, too. If we want to stress this fact we will call
them L-formulas. If L does not contain any predicate symbol, there are no L-formulas.
So from now on we will assume that we have P ≠ ∅.
Using the infix notation again, we have obtained formulas of the shape
- (∀x((1 ∘ x) = x))
- (x = y)
- (∀x(1 = y))
In these cases we have the sets
- FV((∀x((1 ∘ x) = x))) = ∅, BV((∀x((1 ∘ x) = x))) = {x}
- FV((x = y)) = {x, y}, BV((x = y)) = ∅
- FV((∀x(1 = y))) = {y}, BV((∀x(1 = y))) = {x}.
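Definition 1.1.3 can be mirrored the same way as FV for terms. The sketch below computes FV and BV together and enforces the side condition on quantification; the tuple encoding and tag names are our own assumptions, and atomic formulas are simplified to carry their variables directly:

```python
def fv_bv(F):
    """Return (FV(F), BV(F)) following Definition 1.1.3.
    Encoding: ("atom", x1, ..., xk) is an atomic formula whose terms contain
    exactly the variables x1, ..., xk; compound formulas are ("not", F),
    ("and", F, G), ("or", F, G), ("imp", F, G), ("forall", x, F), ("exists", x, F)."""
    tag = F[0]
    if tag == "atom":
        return set(F[1:]), set()            # BV of an atomic formula is ∅
    if tag == "not":
        return fv_bv(F[1])                  # ¬ changes neither FV nor BV
    if tag in ("and", "or", "imp"):
        fv1, bv1 = fv_bv(F[1])
        fv2, bv2 = fv_bv(F[2])
        return fv1 | fv2, bv1 | bv2
    if tag in ("forall", "exists"):
        x, fv, bv = F[1], *fv_bv(F[2])
        if x in bv:                         # the side condition x ∉ BV(F)
            raise ValueError("not a formula: %s is already bound" % x)
        return fv - {x}, bv | {x}

# (∀x(1 = y)): FV = {y}, BV = {x}
print(fv_bv(("forall", "x", ("atom", "y"))))   # → ({'y'}, {'x'})
```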
In the third clause of Definition 1.1.3 we have a condition on the variable for building
formulas. Thus
(∀x(∃x(x = x ∘ 1)))
is not a formula, because x ∈ BV((∃x(x = x ∘ 1))). The grammars in Definitions 1.1.2
and 1.1.3 (and in further definitions to come) are often called inductive definitions. An
inductive definition is given by a set of rules. The least fixed point of an inductive
definition is the smallest set which is closed under all the rules of the inductive definition.
A set is inductively defined if it is the least fixed point of an inductive definition. The
important feature of inductively defined sets is that we may prove properties of their
elements by induction on the definition, which means the following principle:
To show that all elements of some least fixed point M share a property φ,
it suffices to show that φ is preserved under all the rules in the inductive
definition of M.
We will use 'induction on the definition' over and over again, starting with quite simple
situations. Quite easy examples of 'induction on the definition' are given in the
exercises.
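For finitary rules over a finite universe, the 'smallest set closed under all the rules' can be computed by simply iterating the rules until nothing new appears. The following sketch is our own illustration, not part of the text; it builds the set inductively defined by '2 ∈ M' and 'n ∈ M implies n + 3 ∈ M' (cf. Exercise 1.1.2 below), cut off at 20:

```python
def closure(base, step, bound):
    """Finite approximation of a least fixed point: start from the base
    clauses and apply the successor rule until the set stops growing."""
    M = set(base)
    while True:
        new = {m for n in M for m in step(n) if m <= bound} - M
        if not new:          # closed under the rules: we have the fixed point
            return M
        M |= new

M = closure({2}, lambda n: {n + 3}, bound=20)
print(sorted(M))   # → [2, 5, 8, 11, 14, 17, 20]
```

As the output suggests, every element has the form 3m + 2, which is exactly what induction on the definition proves.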
We agree upon the following notations and conventions:
- Formulas which are built according to the first clause of Definition 1.1.3 are called
atomic.
- A term t with FV(t) = ∅ is called closed.
- A formula F with FV(F) = ∅ is called a sentence.
Up to now we have described the objects of interest of the first chapter:
first order languages.
But in this section we only spoke about the syntax of a first order language: about its
alphabet and about its regular expressions. In the next two sections we are going to
develop the semantics of first order languages. Section 1.2 is devoted to giving meaning
to the sentential connectives
∧, ∨, ¬, →
which so far are only syntactical symbols without any meaning. There we will see that first
order languages are powerful enough to represent any truth function (a semantical
object, cf. Definition 1.2.1) by some kind of syntactical expression.

Exercises
E 1.1.1. We define the set of permitted words (i.e. non-void finite strings) over the
alphabet {M, U, I} by the following inductive definition. Here x, y denote words and
concatenated words are denoted by writing them one behind the other.
1. MI is a permitted word.
2. If xI is a permitted word, so is xIU.
3. If Mx is permitted, so is Mxx.
4. If xIIIy is permitted, so is xUy.
5. If xUUy is permitted, so is xy.
Prove the following claims:
a) MUUIU is a permitted word.
b) MU is not permitted.
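The rules of E 1.1.1 can also be explored mechanically. The sketch below (our own illustration, not a solution: a bounded search can witness claim a), but claim b) needs an invariant argument, e.g. that the number of I's in a permitted word is never divisible by 3) generates all permitted words up to a length cap by breadth-first search:

```python
from collections import deque

def successors(w):
    """All words obtainable from w by one application of rules 2-5.
    (Every reachable word starts with M, so rule 3 always applies.)"""
    out = set()
    if w.endswith("I"):                 # rule 2: xI -> xIU
        out.add(w + "U")
    out.add("M" + w[1:] * 2)            # rule 3: Mx -> Mxx
    for i in range(len(w)):
        if w[i:i + 3] == "III":         # rule 4: xIIIy -> xUy
            out.add(w[:i] + "U" + w[i + 3:])
        if w[i:i + 2] == "UU":          # rule 5: xUUy -> xy
            out.add(w[:i] + w[i + 2:])
    return out

def permitted(max_len=12):
    """Closure of {MI} (rule 1) under the rules, capped at max_len."""
    seen, queue = {"MI"}, deque(["MI"])
    while queue:
        for v in successors(queue.popleft()):
            if len(v) <= max_len and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

words = permitted()
print("MUUIU" in words)   # → True  (claim a)
print("MU" in words)      # → False (consistent with claim b, not a proof)
```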
E 1.1.2. The set M ⊆ ℕ is defined inductively by:
1. 2 ∈ M.
2. If n ∈ M, then n + 3 ∈ M.
Prove: n ∈ M iff there is an m ∈ ℕ with n = 3m + 2.
## 1.2 Truth Functions

Up to now we have only developed the syntax of first order languages. A term or a
formula is nothing but a well-formed sequence of letters according to the rules of the
respective grammar. Therefore we need to fix the meaning of the letters and of the
expressions formed out of the letters. We start by fixing the meaning of the sentential
connectives. The purpose of sentential connectives is to connect propositions. A
proposition is something which can either be true or false. Thus sentential connectives
can be regarded as the syntactical counterparts of truth functions, and we have to develop
a theory of truth functions. We represent the truth value 'true' by t; f stands for 'false'.
Definition 1.2.1. An n-ary truth function is a map from {t, f}ⁿ into {t, f}.
Now we have to give a precise meaning to the sentential connectives of colloquial
language. That is easy for negation. We define the truth function ¬ : {t, f} → {t, f}
by
¬(t) = f and ¬(f) = t.
It is also easy for ∧ and ∨. To make their definition more visible we arrange it in
the form of truth tables.

| ∧ | t | f |
|---|---|---|
| t | t | f |
| f | f | f |

| ∨ | t | f |
|---|---|---|
| t | t | t |
| f | t | f |

These truth tables are to be read in the following way. ∧ and ∨ are binary truth
functions. The first argument is in the vertical column left of the vertical line, the
second in the horizontal row above the horizontal line. The value stands at the crossing
point of the row of the first and the column of the second argument.
A bit more subtle to define is the implication →, formalising the colloquial if …
then. The truth table of → is

| → | t | f |
|---|---|---|
| t | t | f |
| f | t | t |
This is the way implication is defined in classical logic. The controversies about the
meaning of implication reach back to the times of the Megarians and Stoics. What
annoys people is the 'ex falso quodlibet', saying that →(f, ·) is always true, independently
of the second argument. However, there are other interpretations of colloquial if …
then statements, leading to different kinds of logic and thus also to different kinds of
mathematics.
One example is intuitionistic logic, which interprets if A, then B in such a way that
a proof of fact A can be converted into one of fact B. In this lecture we will restrict
ourselves to the classical interpretation of implication as given in the above truth table.
Usual mathematics is based on classical logic, which uses the classical interpretation of
implication.
To study the theory of truth functions more generally we are going to introduce a
formal language. The alphabet consists of
- propositional variables, denoted by a, b, a0, …
- all connectives, i.e. names for all truth functions.
Now we are able to build up expressions only with respect to their sentential structure.
Think of propositional variables as variables for propositions (or formulas). The
expressions built up only by these means are called sentential forms.
Definition 1.2.2. We define the sentential forms inductively as follows.
1. Every propositional variable is a sentential form.
2. If A1, …, An are sentential forms and φ is an n-ary connective (i.e. a name for
an n-ary truth function φ*), then (φA1 … An) is a sentential form.
Sentential forms are denoted by A, B, C, A0, …
Think of sentential forms built up by ¬, ∧ and ∨. Then (using infix notation)
- (¬a)
- (((¬a) ∧ b) ∨ a)
are examples of sentential forms.
Now let's make some conventions to spare parentheses:
- outer parentheses will be cancelled;
- ¬ binds more strongly than ∧, ∨, →, and ∧, ∨ bind more strongly than →, i.e. we will write
¬A ∧ B → C ∨ ¬C
for
(((¬A) ∧ B) → (C ∨ (¬C)));
- we will write
A1 → A2 → … → An
for
(A1 → (A2 → (… → An) …)).
This will also be the case if we replace → by ∧ or ∨.
Now we want to formalise the idea that propositional variables are variables for
propositions (which can either be true or false). Therefore we assume that a truth value
is associated with every propositional variable. Then we are able to determine
the truth value of a sentential form by successive application of the corresponding truth
functions.
A boolean assignment is a map B : A → {t, f}, where A denotes the set of propositional
variables. Boolean assignments are denoted by B, B′, … We define the value B(A) of
a sentential form A induced by a boolean assignment B as follows.
Definition 1.2.3. We define B(A) for sentential forms A inductively, according to the
definition of the sentential forms.
1. If A ∈ A, then B(A) is already given by the assignment.
2. If A = (φA1 … An), where φ is a name for the n-ary truth function φ*, then
B(A) = φ*(B(A1), …, B(An)).
If B is a boolean assignment with B(a) = f and B(b) = t, then we obtain in the above
example
- B(¬a) = ¬(B(a)) = t
- B(((¬a ∧ b) ∨ a)) = ((¬(B(a))) ∧ B(b)) ∨ B(a)
  = ((¬f) ∧ t) ∨ f
  = t
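Definition 1.2.3 is an evaluation recursion, so the computation above can be replayed mechanically. In this sketch (the tuple encoding and the names are our own assumptions) `True` plays the role of t and `False` of f:

```python
# Truth functions for the connectives used in the text (True = t, False = f).
TRUTH = {
    "not": lambda x: not x,
    "and": lambda x, y: x and y,
    "or":  lambda x, y: x or y,
    "imp": lambda x, y: (not x) or y,   # classical implication
}

def value(A, B):
    """B(A) from Definition 1.2.3: a variable is looked up in the
    assignment B; a compound form (phi, A1, ..., An) is evaluated by
    applying the truth function named by phi to the values of the Ai."""
    if isinstance(A, str):
        return B[A]
    phi, *subs = A
    return TRUTH[phi](*(value(S, B) for S in subs))

# ((¬a ∧ b) ∨ a) under B(a) = f, B(b) = t:
A = ("or", ("and", ("not", "a"), "b"), "a")
print(value(A, {"a": False, "b": True}))   # → True, i.e. t
```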
Now we give a first example of using 'induction on the definition'.
Proposition 1.2.4. If A is a sentential form and B is a boolean assignment, then
B(A) ∈ {t, f}.
Proof by induction on the definition of 'A is a sentential form'.
1. If A ∈ A, then B(A) ∈ {t, f} according to the definition of a boolean assignment.
2. If A = (φA1 … An), then we have (B(A1), …, B(An)) ∈ {t, f}ⁿ by the induction
hypothesis (which applies because A1, …, An are previously defined sentential
forms). Since φ is a name for the n-ary truth function φ* and B(A) =
φ*(B(A1), …, B(An)), it follows that B(A) ∈ {t, f}.
Let A be a sentential form and {a1, …, an} the set of propositional variables occurring
in A. It is obvious from Definition 1.2.3 that B(A) only depends on B ↾ {a1, …, an}
(i.e. B restricted to the finite set {a1, …, an}). There are only 2ⁿ boolean assignments
which differ on {a1, …, an}. This means that there is an obvious algorithm
for computing B(A), which consists in writing down B(a1), …, B(an) for the 2ⁿ
assignments which differ on {a1, …, an} and then computing B(A) according to the
truth tables for the functions represented by the connectives occurring in A. For a
precise formalisation of this fact cf. the appendix (Lemma A.1.11).
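The algorithm just described (write down the 2ⁿ assignments, then evaluate) is a short loop; here it is for the sentential form (¬a ∧ b) ∨ a, our own example layout:

```python
from itertools import product

A = lambda a, b: ((not a) and b) or a    # the sentential form (¬a ∧ b) ∨ a

t = lambda v: "t" if v else "f"          # print booleans as truth values
for a, b in product([True, False], repeat=2):   # the 2^2 assignments
    print(t(a), t(b), "|", t(A(a, b)))

# Prints the truth table of A:
# t t | t
# t f | t
# f t | t
# f f | f
```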
Definition 1.2.5. Let A and B be sentential forms. We say that A and B are
sententially equivalent, written A ≡ B, if B(A) = B(B) for every boolean assignment B.
Proposition 1.2.6. ≡ is an equivalence relation on the sentential forms, i.e. we have
A ≡ A,
A ≡ B entails B ≡ A,
A ≡ B and B ≡ C entail A ≡ C.
The following proposition gives a list of equivalent sentential forms.
Proposition 1.2.7.
a) A ^ B  B ^ A; A _ B  B _ A:
b) :(A ^ B)  :A _ :B; :(A _ B)  :A ^ :B:
14 I. Pure Logic
c) :(:A)  A:
d) (A ^ B) ^ C  A ^ (B ^ C); (A _ B) _ C  A _ (B _ C):
e) (A ^ B) _ C  (A _ C) ^ (B _ C); (A _ B) ^ C  (A ^ C) _ (B ^ C):
f) A ! B  :A _ B:
The proofs are obtained by mere computation of both sides.
At this point we want to single out some connectives, respectively some truth functions, by which all other connectives, respectively truth functions, can be represented in the way → is represented by ¬ and ∨ in Proposition 1.2.7.
Definition 1.2.8.
a) A sentential form A₁ ∧ … ∧ Aₙ, in which every Aᵢ, i = 1, …, n, is either a propositional variable or of the form ¬aᵢ for aᵢ ∈ 𝒜, is a pure conjunction.
b) Dually a sentential form A₁ ∨ … ∨ Aₙ, where the Aᵢ, i = 1, …, n, are as above, is called a pure disjunction (or clause).
c) A sentential form A₁ ∧ … ∧ Aₙ, where all the Aᵢ (i = 1, …, n) are pure disjunctions, is a conjunctive normal form.
d) Dually a sentential form A₁ ∨ … ∨ Aₙ with pure conjunctions Aᵢ (i = 1, …, n) is a disjunctive normal form.
The aim of the following theorem is to obtain an equivalent disjunctive normal form for an arbitrary sentential form. How the normal form can be computed (though not in the general situation) is demonstrated by the following example:
(a ∨ ¬c) ∧ (b ∨ c) ≡ (a ∧ (b ∨ c)) ∨ (¬c ∧ (b ∨ c))
≡ (a ∧ b) ∨ (a ∧ c) ∨ (¬c ∧ b) ∨ (¬c ∧ c)
≡ (a ∧ b) ∨ (a ∧ c) ∨ (¬c ∧ b)
using Proposition 1.2.7 and the fact that B(¬c ∧ c) = f for all boolean assignments B.
Theorem 1.2.9. Let A be a sentential form. Then there is a disjunctive normal form B such that A ≡ B.
Proof. Let A be a sentential form. Then there are only finitely many propositional variables, say a₁, …, aₙ, occurring in A. We have 2ⁿ boolean assignments
B₁, …, B_{2ⁿ}
which differ on {a₁, …, aₙ}. Now we define sentential forms
A_{ik} = a_k if B_i(a_k) = t, and A_{ik} = ¬a_k if B_i(a_k) = f,
for i = 1, …, 2ⁿ and k = 1, …, n. Then we have
B_i(A_{ik}) = t,
and for i ≠ j there is a k ∈ {1, …, n} such that
B_i(a_k) ≠ B_j(a_k).
This entails
B_j(A_{jk}) ≠ B_i(A_{jk}),
and since B_j(A_{jk}) = t we have B_i(A_{jk}) = f. Fitting the parts together we have for the pure conjunctions
C_i = A_{i1} ∧ … ∧ A_{in},  i = 1, …, 2ⁿ,
the fact that
B_i(C_j) = t iff i = j,
since B_i(C_j) = t just in the case that B_i(A_{jk}) = t for all k = 1, …, n. Without loss of generality we may assume that we have numbered the boolean assignments in such a way that
B_i(A) = t for i = 1, …, m
and
B_j(A) = f for j = m + 1, …, 2ⁿ.
If m = 0 define
B = a₀ ∧ ¬a₀.
Then B is a disjunctive normal form with
A ≡ B
since B(A) = f = B(B) for all boolean assignments B. If m ≠ 0 define the disjunctive normal form
B = C₁ ∨ … ∨ C_m.
Then for i = 1, …, m
B_i(B) = B_i(C_i) = t = B_i(A)
and for j = m + 1, …, 2ⁿ
B_j(B) = f = B_j(A)
since
B_j(C₁) = … = B_j(C_m) = f.
So we have for all i = 1, …, 2ⁿ
B_i(A) = B_i(B)
and we can conclude A ≡ B.
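The construction in the proof is effective: one pure conjunction C_i for every assignment B_i making A true, disjoined together. The following Python sketch (our own illustration; sentential forms are nested tuples, truth values are Python booleans) carries it out for the example formula above.

```python
from itertools import product

def value(form, B):
    """B(A): form is a variable name, ("not", X), ("and", X, Y) or ("or", X, Y)."""
    if isinstance(form, str):
        return B[form]
    op, *args = form
    v = [value(a, B) for a in args]
    if op == "not":
        return not v[0]
    if op == "and":
        return v[0] and v[1]
    return v[0] or v[1]                     # "or"

def dnf(form, vs):
    """Disjunctive normal form following the proof of Theorem 1.2.9:
    one pure conjunction C_i per assignment B_i with B_i(A) = t."""
    disjuncts = []
    for bits in product([True, False], repeat=len(vs)):
        B = dict(zip(vs, bits))
        if value(form, B):
            # A_ik = a_k if B_i(a_k) = t, else (not a_k)
            lits = [v if B[v] else ("not", v) for v in vs]
            conj = lits[0]
            for lit in lits[1:]:
                conj = ("and", conj, lit)
            disjuncts.append(conj)
    if not disjuncts:                       # the case m = 0 in the proof
        return ("and", vs[0], ("not", vs[0]))
    res = disjuncts[0]
    for d in disjuncts[1:]:
        res = ("or", res, d)
    return res

# (a or not c) and (b or c), the example from the text:
A = ("and", ("or", "a", ("not", "c")), ("or", "b", "c"))
B_dnf = dnf(A, ["a", "b", "c"])
# Equivalence check: same value under every boolean assignment.
assert all(value(A, dict(zip("abc", bits))) == value(B_dnf, dict(zip("abc", bits)))
           for bits in product([True, False], repeat=3))
```

Note that, unlike the hand computation via the distributive laws, this normal form has one disjunct per satisfying assignment, so it can be exponentially larger than necessary.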
Corollary 1.2.10. For any sentential form A there is a conjunctive normal form B such that A ≡ B.
Proof. To prove the corollary we observe that, in view of Proposition 1.2.7, the negation of a disjunctive normal form is equivalent to a conjunctive normal form and vice versa. Thus choose a disjunctive normal form B₀ equivalent to ¬A, which exists by 1.2.9. Then by 1.2.7 A ≡ ¬¬A ≡ ¬B₀, which by the above remark is equivalent to a conjunctive normal form.
Definition 1.2.11. Let M be a set of connectives, i.e. names for some fixed truth functions. We call M complete if for every sentential form there is an equivalent sentential form only containing connectives from M.
So we obtain as another immediate corollary of Theorem 1.2.9:
Theorem 1.2.12. {¬, ∧, ∨}, {¬, ∨} and {¬, ∧} are complete sets of connectives.
Proof. From Theorem 1.2.9 we see that {¬, ∧, ∨} is complete. But according to 1.2.7 we can express ∧ by ¬ and ∨, and ∨ by ¬ and ∧.
At this point we have obtained a justification for taking only the connectives
∧, ∨, ¬, →
into the alphabet of a first order language (cf. Definition 1.1.1), since every other connective can be represented by them.
Exercises
E 1.2.1 (Sheffer stroke). The binary truth function | : {t, f}² → {t, f} is given by:
 |  t f
 t  f t
 f  t t
One may think of | as a connective. Prove that {|} is a complete set of connectives. Define the connectives ¬, →, ∨, ∧ using only |.
E 1.2.2. The binary truth function ↓ : {t, f}² → {t, f} is given by:
 ↓  t f
 t  f f
 f  f t
One may think of ↓ as a connective. Is {↓} complete?
E 1.2.3. Let {∘₁, …, ∘ₙ} be a complete set of connectives. Prove or disprove that {¬∘₁, …, ¬∘ₙ} is complete.
E 1.2.4. Prove: {∧, ∨} is not complete.
E 1.2.5. Is {∧, ∨, →} a complete set of connectives?
E 1.2.6. A king puts a prisoner to a severe test. He orders the prisoner into a room with two doors. Behind each door there may be a tiger or a princess. Choosing the door with the princess, the prisoner will be set free. Otherwise he will be torn to pieces by the tiger. Knowing that the prisoner is a logician, the king has mounted a sign on each door.

Left door: "The choice of the room doesn't make any difference."
Right door: "The princess is in the other room."

He gives the following information to the prisoner:
"If the princess is behind the door on the left hand side, the sign at that door is true. If there is the tiger, it is false. With the other door it is just the other way round."
Which door should the prisoner choose?
a) Formalise the exercise.
b) Determine the equivalent disjunctive normal form and use it to help the prisoner make his decision.
## 1.3 Semantics for First Order Logic

Having discussed the meaning of the sentential connectives we can now turn to fixing the meaning of the terms and formulas introduced in section 1.1. The first step in that direction is to give the meaning of the non-logical symbols of a first order language.
Definition 1.3.1. An L(𝒞, ℱ, 𝒫)-structure is given by a quadruple
S = (S, C, F, P)
satisfying the following properties:
1. S is a non-void set. It is called the domain of the structure.
2. C = {c^S : c ∈ 𝒞} ⊆ S.
3. F = {f^S : f ∈ ℱ} is a set of functions on S such that f^S : Sⁿ → S if #f = n.
4. P = {P^S : P ∈ 𝒫} is a set of predicates on S, i.e. for P ∈ 𝒫 with #P = n it is P^S ⊆ Sⁿ.
Let us give an easy example. Let L_GT = L(𝒞_GT, ℱ_GT, 𝒫_GT) be the language of group theory. In an L_GT-structure we have interpretations for 1, ∘ and =. Thus, if G = (G, 1_G, ∘_G) is a group we have an L_GT-structure with domain G interpreting
• 1 by 1_G, which is 1_G ∈ G
• ∘ by the function ∘_G, which is ∘_G : G² → G, (x, y) ↦ x ∘_G y
• = by the predicate =_G, which is {(x, x) : x ∈ G} ⊆ G²
Thus every group is an L_GT-structure. But we also have very strange L_GT-structures, e.g. the following one:
• the domain of the structure is ℕ
• ∘ is interpreted by the function f : ℕ² → ℕ, (x, y) ↦ 2x
• = is interpreted by the predicate {(x, y) ∈ ℕ² : x < y}
This structure has of course nothing to do with a group.
Now we are going to give a meaning to the syntactical material of section 1.1 (i.e. terms and formulas) with respect to a given structure, i.e. with respect to the meaning of the non-logical symbols.
In a first step we assign elements of the domain of the structure to the variables. By an assignment for an L(𝒞, ℱ, 𝒫)-structure S = (S, C, F, P) we understand a map
Φ : V → S
where V denotes the set of object variables. Assignments are denoted by Φ, Ψ, Φ′, Ψ′, …
Now let L = L(𝒞, ℱ, 𝒫) be a first order language, S = (S, C, F, P) an L-structure and Φ an assignment for S. By Φ we have interpreted the variables in S. Now we can lift the interpretation to all L-terms.
Definition 1.3.2. The value t^S[Φ] of an L-term t in the L-structure S with respect to the assignment Φ is defined by induction on the definition of the L-terms as follows:
1. If t is the variable x, then t^S[Φ] = Φ(x).
2. If t = c, then t^S[Φ] = c^S.
3. If t = (ft₁…tₙ), then t^S[Φ] = f^S(t₁^S[Φ], …, tₙ^S[Φ]).
For an example regard the group G = (ℤ, 0, +) of the integers as an L_GT-structure. Take the term
t = (1 ∘ x) ∘ y
and an assignment Φ for G with Φ(x) = 5 and Φ(y) = −3. Then
t^G[Φ] = (0 + 5) + (−3) = 2.
Proposition 1.3.3. Let S be an L-structure and Φ an S-assignment.
a) t^S[Φ] ∈ S for any L-term t.
b) If t is an L-term with FV(t) = ∅, then t^S[Φ] = t^S[Ψ] for all S-assignments Φ and Ψ. In this case we write briefly t^S.
Proof. We only prove the first part at full length. This is an induction on the definition of `t is an L-term'. Therefore we have the following cases:
• t = x
Then t^S[Φ] = Φ(x) ∈ S by the definition of Φ.
• t = c ∈ 𝒞
Then t^S[Φ] = c^S ∈ S by the definition of c^S.
• t = (ft₁…tₙ) with L-terms t₁, …, tₙ and f ∈ ℱ
By the definition of f^S it is f^S : Sⁿ → S, and by the induction hypothesis we have
t₁^S[Φ], …, tₙ^S[Φ] ∈ S.
So
t^S[Φ] = f^S(t₁^S[Φ], …, tₙ^S[Φ]) ∈ S.
This finishes the induction.
The proof of the second part is an induction on the definition of `t is an L-term', too. There one need not consider the case t = x because of FV(t) = ∅.
In the next step we are going to define the truth value Val_S(F, Φ) of an L-formula F under an assignment Φ for S. To simplify the definition we introduce
Ψ ≡ₓ Φ :⟺ ∀y ∈ V (x ≠ y ⟹ Ψ(y) = Φ(y)).
This means that the assignments Ψ and Φ differ at most at the variable x.
Definition 1.3.4. We define Val_S(F, Φ) by induction on the definition of the formulas.
1. If F is an atomic formula (Pt₁…tₙ) we put
Val_S(F, Φ) = t if (t₁^S[Φ], …, tₙ^S[Φ]) ∈ P^S, and f otherwise.
2. Val_S(¬F₀, Φ) = ¬(Val_S(F₀, Φ))
3. Val_S(F₁ ∧ F₂, Φ) = Val_S(F₁, Φ) ∧ Val_S(F₂, Φ)
4. Val_S(F₁ ∨ F₂, Φ) = Val_S(F₁, Φ) ∨ Val_S(F₂, Φ)
5. Val_S(∀xF₀, Φ) = t if Val_S(F₀, Ψ) = t for all Ψ ≡ₓ Φ, and f otherwise.
6. Val_S(∃xF₀, Φ) = t if Val_S(F₀, Ψ) = t for some Ψ ≡ₓ Φ, and f otherwise.
Instead of Val_S(F, Φ) = t we commonly write
S ⊨ F[Φ].
Thus S ⊭ F[Φ] means Val_S(F, Φ) = f. To make clauses 5 and 6 in Definition 1.3.4 better conceivable we are going to prove that both clauses meet the intuitive understanding of the quantifiers ∀ and ∃ (cf. Lemma 1.3.7). Though more perspicuous, this alternative formulation has the disadvantage that it needs a bigger apparatus.
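Over a finite domain, clauses 5 and 6 can be checked directly, since an assignment Ψ with Ψ ≡ₓ Φ is determined by its value at x; quantification then reduces to a finite search. The following Python sketch is our own illustration (the tuple encoding of formulas and the example predicate are assumptions; terms are restricted to variables to keep the sketch short).

```python
# Evaluating Val_S(F, phi) over a finite structure, following the
# clauses of Definition 1.3.4.

S = {0, 1, 2}                                  # the domain
PREDICATES = {"=": {(a, a) for a in S}}        # P^S for the predicate =

def val(F, phi):
    op = F[0]
    if op in PREDICATES:                       # clause 1: atomic (P t1 ... tn)
        return tuple(phi[t] for t in F[1:]) in PREDICATES[op]
    if op == "not":                            # clause 2
        return not val(F[1], phi)
    if op == "and":                            # clause 3
        return val(F[1], phi) and val(F[2], phi)
    if op == "or":                             # clause 4
        return val(F[1], phi) or val(F[2], phi)
    if op == "forall":                         # clause 5: all psi with psi =_x phi
        return all(val(F[2], {**phi, F[1]: s}) for s in S)
    if op == "exists":                         # clause 6: some psi with psi =_x phi
        return any(val(F[2], {**phi, F[1]: s}) for s in S)
    raise ValueError(op)

# forall x exists y (x = y) holds, but exists y forall x (x = y) fails:
print(val(("forall", "x", ("exists", "y", ("=", "x", "y"))), {}))  # True
print(val(("exists", "y", ("forall", "x", ("=", "x", "y"))), {}))  # False
```

The two printed formulas also illustrate Exercise E 1.3.2 c): the quantifiers do not commute in general.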
We denote by F_x(t) the string which is obtained from the string F by replacing all occurrences of x by t; similarly for s_x(t).
Proposition 1.3.5. If F is an L-formula and t is an L-term such that
FV(t) ∩ BV(F) = ∅ and x ∉ BV(F),
then F_x(t) is a formula with
FV(F_x(t)) ⊆ FV(t) ∪ (FV(F) \ {x})
and BV(F_x(t)) ⊆ BV(F).
Proof. This is left to the reader as an exercise.
From now on we always tacitly assume that
FV(t) ∩ BV(F) = ∅ and x ∉ BV(F)
whenever we write F_x(t).
Lemma 1.3.6. Let s, t be L-terms, F an L-formula and S an L-structure. If Φ and Ψ are S-assignments such that
Φ ≡ₓ Ψ and Φ(x) = t^S[Ψ],
then s^S[Φ] = s_x(t)^S[Ψ] and Val_S(F, Φ) = Val_S(F_x(t), Ψ).
Proof. First we show s^S[Φ] = s_x(t)^S[Ψ] by induction on the definition of s.
• If s = y ≠ x, then s^S[Φ] = Φ(y) = Ψ(y) because Φ ≡ₓ Ψ.
• If s = x, then s^S[Φ] = Φ(x) = t^S[Ψ] = s_x(t)^S[Ψ].
• If s = c ∈ 𝒞, then s^S[Φ] = c^S = s_x(t)^S[Ψ].
• If s = (fs₁…sₙ), then by the induction hypothesis
s^S[Φ] = f^S(s₁^S[Φ], …, sₙ^S[Φ])
= f^S((s₁)ₓ(t)^S[Ψ], …, (sₙ)ₓ(t)^S[Ψ])
= s_x(t)^S[Ψ].
Next we show Val_S(F, Φ) = Val_S(F_x(t), Ψ) by induction on the definition of F.
• If F = (Ps₁…sₙ), then S ⊨ F[Φ] iff
(s₁^S[Φ], …, sₙ^S[Φ]) ∈ P^S,
which holds iff
((s₁)ₓ(t)^S[Ψ], …, (sₙ)ₓ(t)^S[Ψ]) ∈ P^S
by the first part. But this means S ⊨ F_x(t)[Ψ].
• If F = ¬F₀, then
Val_S(F, Φ) = ¬(Val_S(F₀, Φ))
= ¬(Val_S((F₀)ₓ(t), Ψ))
= Val_S(¬(F₀)ₓ(t), Ψ),
where the equation between the second and the third term holds by the induction hypothesis. In the following we will indicate this by writing =_{i.h.}.
• If F = (F₁ ∘ F₂), where ∘ is a connective ∧, ∨, →, then
Val_S(F, Φ) = Val_S(F₁, Φ) ∘ Val_S(F₂, Φ)
=_{i.h.} Val_S((F₁)ₓ(t), Ψ) ∘ Val_S((F₂)ₓ(t), Ψ)
= Val_S(F_x(t), Ψ).
• Let F = ∀yG. If S ⊨ F[Φ], then S ⊨ G[Φ′] for all Φ′ ≡_y Φ. We have x ∉ BV(F), thus x ≠ y. Let Ψ′ ≡_y Ψ. Then define
ρ(z) = Ψ′(z) for z ≠ x, and ρ(z) = Φ(z) for z = x.
Then ρ ≡_y Φ because for z ≠ y we have
ρ(z) = Ψ′(z) = Ψ(z) = Φ(z)
if z ≠ x, and ρ(z) = Φ(z) for z = x. Thus we have
S ⊨ G[ρ].
We have ρ ≡ₓ Ψ′ by definition and obtain
ρ(x) = Φ(x) = t^S[Ψ] = t^S[Ψ′]
since Ψ′ ≡_y Ψ and y ∉ FV(t) because of y ∈ BV(F) and FV(t) ∩ BV(F) = ∅. Hence
S ⊨ G_x(t)[Ψ′]
by the induction hypothesis. Since Ψ′ was an arbitrary assignment such that Ψ′ ≡_y Ψ, this entails
S ⊨ ∀yG_x(t)[Ψ].
For the opposite direction assume
S ⊨ ∀yG_x(t)[Ψ].
Let Φ′ ≡_y Φ. Define
ρ(z) = Φ′(z) for z ≠ x, and ρ(z) = Ψ(z) for z = x.
Then ρ ≡_y Ψ because for z ≠ y we have
ρ(z) = Φ′(z) = Φ(z) = Ψ(z)
if z ≠ x, and ρ(z) = Ψ(z) for z = x. Hence
S ⊨ G_x(t)[ρ].
But Φ′ ≡ₓ ρ and
Φ′(x) = Φ(x) = t^S[Ψ] = t^S[ρ]
because BV(F) ∩ FV(t) = ∅ and y ∈ BV(F). Thus
S ⊨ G[Φ′]
by the induction hypothesis. Since Φ′ was arbitrary with Φ′ ≡_y Φ, this means
S ⊨ ∀yG[Φ].
• The case F = ∃yG is similar and left to the reader.
If S = (S, C, F, P) is an L-structure we may extend L = L(𝒞, ℱ, 𝒫) to a language
L_S = L(𝒞 ∪ S̄, ℱ, 𝒫)
where S̄ = {s̄ : s ∈ S} is a set of new constant symbols, and expand S to an L_S-structure
S_S = (S, C ∪ S, F, P)
where s̄^{S_S} = s. It is obvious that any S_S-assignment is also an S-assignment and vice versa, because an assignment only depends on the domain of a structure. Thus L_S is obtained from L by giving `names' to the elements of S.
Here is an easy example. Let L = L_GT be the language of group theory and S = (ℤ, 0, +) the group of the integers. Then S̄ is the set {z̄ : z ∈ ℤ}, where z̄ is nothing but a new constant symbol. In the expanded structure S_S we interpret z̄ (which is thought of as a name for the object z) by the object z.
Lemma 1.3.7. Let F be an L-formula and S an L-structure. Then:
a) S ⊨ ∀xF[Φ] iff S_S ⊨ F_x(s̄)[Φ] for all s ∈ S.
b) S ⊨ ∃xF[Φ] iff S_S ⊨ F_x(s̄)[Φ] for some s ∈ S.
Proof. Before we start the proof we make the general observation that
S ⊨ ∀xF[Φ] iff S_S ⊨ ∀xF[Φ]
because F is an L-formula (cf. Exercise E 1.3.5). To show the direction from left to right in a) assume S_S ⊨ ∀xF[Φ] and choose an arbitrary s ∈ S. Define
Ψ(y) = Φ(y) if y ≠ x, and Ψ(y) = s if y = x.
Then Ψ ≡ₓ Φ and Ψ(x) = s = s̄^{S_S}[Φ]. Hence S_S ⊨ F[Ψ], which entails S_S ⊨ F_x(s̄)[Φ] by Lemma 1.3.6.
For the opposite direction assume S_S ⊨ F_x(s̄)[Φ] for all s ∈ S. Let Ψ ≡ₓ Φ and set s = Ψ(x). Then S_S ⊨ F_x(s̄)[Φ], which by 1.3.6 entails S_S ⊨ F[Ψ]. Hence S ⊨ ∀xF[Φ].
The proof of b) runs analogously.
We see from 1.3.7 that we have really captured the colloquial meaning of ∀ and ∃ by Definition 1.3.4. Before we continue to investigate the semantical properties we introduce some frequently used phrases.
Definition 1.3.8. Let L be a first order language and M a set of L-formulas.
a) M is satisfiable in S if there is an S-assignment Φ such that S ⊨ F[Φ] for all F ∈ M.
b) M is valid in S if S ⊨ F[Φ] for all F ∈ M and all S-assignments Φ.
c) M is satisfiable or consistent if there is an L-structure S such that M is satisfiable in S.
d) M is valid if M is valid in every L-structure S.
For a formula F we denote by S ⊨ F that F is valid in S, and by ⊨ F that F is valid.
Let us illustrate this definition by some examples. Let L_GT again be the language of group theory and G a group (which is an L_GT-structure, too).
• M = {(x = 1)} is satisfiable in G because if we take a G-assignment Φ with Φ(x) = 1_G we have
G ⊨ x = 1[Φ].
• If G is not a group with only one element, M = {(x = 1)} is not valid in G, because if we take g ∈ G with g ≠ 1_G and a G-assignment Φ with Φ(x) = g we have
G ⊭ x = 1[Φ].
Now let Ax_GT be the set of `axioms' of group theory, i.e. the set of the following formulas:
∀x∀y∀z(x ∘ (y ∘ z) = (x ∘ y) ∘ z)
∀x(x ∘ 1 = x)
∀x∃y(x ∘ y = 1)
Thus the L_GT-structures S interpreting = by {(x, x) : x ∈ S} in which Ax_GT is valid are just the groups.
• Ax_GT ∪ {∀x∀y(x ∘ y = y ∘ x)} is consistent because there are commutative groups.
It follows from Definition 1.3.4 that Val_S(F, Φ) only depends on Φ ↾ FV(F). Thus we have the following property:
Proposition 1.3.9. Let S be an L-structure, F an L-formula and Φ and Ψ S-assignments such that Φ ↾ FV(F) = Ψ ↾ FV(F). Then
Val_S(F, Φ) = Val_S(F, Ψ).
It follows from 1.3.9 that Val_S(F, Φ) does not depend on Φ if F is a sentence. Thus sentences have a fixed truth value in an L-structure S. That is the reason for calling them sentences. For sentences there is no difference between satisfiability and validity with respect to a fixed structure: if F is an L-sentence, then F is satisfiable in an L-structure S iff it is valid in S.
If F is an L-formula with FV(F) ⊆ {x₁, …, xₙ} and Φ an assignment such that Φ(xᵢ) = aᵢ, then we often write
S ⊨ F[a₁, …, aₙ] instead of S ⊨ F[Φ].
According to 1.3.9, Val_S(F, Φ) is determined by a₁, …, aₙ.
Definition 1.3.10. Let L be a first order language and F, G L-formulas. We say that F and G are semantically equivalent if
Val_S(F, Φ) = Val_S(G, Φ)
for any L-structure S and any S-assignment Φ. We denote semantical equivalence by F ≡_S G.
Lemma 1.3.11. We have F ≡_S G iff ⊨ (F → G) ∧ (G → F).
Proof. If F ≡_S G, then Val_S(F, Φ) = Val_S(G, Φ) for any structure S and any S-assignment Φ. According to the truth table of → this entails
Val_S(F → G, Φ) = t and Val_S(G → F, Φ) = t.
Hence ⊨ (F → G) ∧ (G → F). For the opposite direction assume
Val_S(F, Φ) ≠ Val_S(G, Φ)
for some S and S-assignment Φ. If Val_S(F, Φ) = t, then Val_S(F → G, Φ) = f. Hence
⊭ (F → G) ∧ (G → F).
If Val_S(F, Φ) = f, then Val_S(G → F, Φ) = f and again we get
⊭ (F → G) ∧ (G → F).
We define the connective ↔ by F ↔ G = (F → G) ∧ (G → F). This means that the truth function interpreting ↔ is given by this combination of the truth functions for conjunction and implication. Then 1.3.11 reads as
F ≡_S G iff ⊨ F ↔ G.
Proposition 1.3.12. Semantical equivalence is an equivalence relation on the L-formulas.
The following proposition gives a list of semantically equivalent formulas.
Proposition 1.3.13.
a) F ∧ G ≡_S G ∧ F, F ∨ G ≡_S G ∨ F
b) ¬(F ∧ G) ≡_S ¬F ∨ ¬G, ¬(F ∨ G) ≡_S ¬F ∧ ¬G
c) ¬(¬F) ≡_S F
d) (F ∧ G) ∧ H ≡_S F ∧ (G ∧ H), (F ∨ G) ∨ H ≡_S F ∨ (G ∨ H)
e) (F ∧ G) ∨ H ≡_S (F ∨ H) ∧ (G ∨ H), (F ∨ G) ∧ H ≡_S (F ∧ H) ∨ (G ∧ H)
f) F → G ≡_S ¬F ∨ G
g) ¬(∃xF) ≡_S ∀x(¬F), ¬(∀xF) ≡_S ∃x(¬F)
Proof. Claims a) to f) obviously do not depend on the quantifier structure of the formulas involved. Thus a) to f) follow from 1.2.7 just because of the propositional structure of the formulas involved. We are going to study the propositional structure and properties of first order formulas in the next section; the precise argument will be given by Corollary 1.4.6. Thus all we have to check is g).
Assume S ⊨ ¬∃xF[Φ] for some L-structure S and an S-assignment Φ. Then S ⊭ ∃xF[Φ], which says that there is no S-assignment Ψ ≡ₓ Φ such that S ⊨ F[Ψ], i.e. we have S ⊨ ¬F[Ψ] for all S-assignments Ψ ≡ₓ Φ. Hence S ⊨ ∀x¬F[Φ].
If S ⊨ ∀x¬F[Φ], then S ⊨ ¬F[Ψ] for all Ψ ≡ₓ Φ, which shows that there is no S-assignment Ψ ≡ₓ Φ such that S ⊨ F[Ψ]. Hence S ⊨ ¬∃xF[Φ].
The second part follows from the first by the computation
¬∀xF ≡_S ¬∀x¬(¬F) ≡_S ¬(¬∃x¬F) ≡_S ∃x¬F.
Exercises
E 1.3.1.
a) Prove: ⊨ ∀x(F ∧ G) → ∀xF ∧ ∀xG.
b) Let L be a first order language including a constant symbol 0. Determine L-formulas F and G with
⊭ ∀x(F ∨ G) → ∀xF ∨ ∀xG.
E 1.3.2. Let L be a first order language and P a predicate symbol of L. Which of the following formulas are valid?
a) (F → G) → ((F → ¬G) → ¬F)
b) ∀xF → ∃xF
c) ∀y∃xPyx → ∃x∀yPyx
d) ∃xF ∧ ∃xG → ∃x(F ∧ G)
E 1.3.3.
a) Let S be an L-structure. Prove that if
S ⊨ G → F and x ∉ FV(G),
then
S ⊨ G → ∀xF.
b) Is the condition x ∉ FV(G) necessary? Prove your claim.
E 1.3.4. Prove Proposition 1.3.5. Hint: Show first that for a term s also s_x(t) is a term with
FV(s_x(t)) ⊆ FV(t) ∪ (FV(s) \ {x}).
Do we have in general
FV(F_x(t)) = FV(t) ∪ (FV(F) \ {x})?
E 1.3.5. Let F be an L-formula, S an L-structure and Φ an S-assignment. Prove:
S ⊨ F[Φ] ⟺ S_S ⊨ F[Φ].
E 1.3.6.
a) Determine a first order language L_VS suited for talking about a vector space and its field.
Hint: use a binary predicate symbol `='.
b) Formulate a theory (a set of sentences) T_VS in the language L_VS such that for all L_VS-I-structures S (i.e. L_VS-structures interpreting = by {(s, s) : s ∈ S}, cf. also section 1.10) one has: S ⊨ T_VS ⟺ S consists of a field and a vector space over this field.
c) Define the L_VS-I-structure S of the continuous functions over the field ℝ of the real numbers.
d) Determine L_VS-formulas F and G such that the following holds in all L_VS-I-structures S with S ⊨ T_VS:
1. s₁, …, sₙ ∈ S are linearly independent ⟺ S ⊨ F[s₁, …, sₙ].
2. s₁, …, sₙ ∈ S form a vector space basis ⟺ S ⊨ G[s₁, …, sₙ].
E 1.3.7.
a) Let S be an L-structure and Φ an S-assignment. Prove:
S ⊨ ∃xF[Φ] ⟺ S ⊨ ∃yF_x(y)[Φ]
if y ∉ FV(F) ∪ BV(F).
b) Is the condition y ∉ FV(F) ∪ BV(F) in a) necessary? Prove your claim.
c) Let S be an L-structure and Φ an S-assignment. Now let F and F̃ be two L-formulas which are obtained from each other by renaming bound variables. Prove:
S ⊨ F[Φ] ⟺ S ⊨ F̃[Φ].
E 1.3.8. Let F be an L-formula with FV(F) = {x₁, …, xₙ}. Prove that for any L-structure S
S ⊨ F ⟺ S ⊨ ∀x₁…∀xₙF.
E 1.3.9. Let L_ℕ = L(0, 1, +, ·, <, =) be the language of number theory and N = (ℕ, 0, 1, +, ·, <, =) the L_ℕ-structure of the natural numbers. Determine L_ℕ-formulas F and G such that
1. N ⊨ F[n] ⟺ n ∈ ℕ is prime
2. N ⊨ G ⟺ there are infinitely many twin primes.
## 1.4 Propositional Properties of First Order Logic
As indicated in the proof of 1.3.13 there are semantical properties of first order formulas which do not depend on their quantifier structure. To make this explicit we devote this section to the study of the propositional properties of first order languages. To have a starting point for our studies we define the propositional parts PP(F) of a formula F.
Definition 1.4.1. For a formula F the set PP(F) is defined by induction on the definition of the formulas.
1. If F is either atomic or a formula QxG with Q ∈ {∀, ∃}, then PP(F) = {F}.
2. If F is a formula ¬G, then PP(F) = PP(G) ∪ {F}.
3. If F is a formula G ∘ H where ∘ ∈ {∧, ∨, →}, then
PP(F) = PP(G) ∪ PP(H) ∪ {F}.
A first order formula F with PP(F) = {F} is a propositional atom. Thus propositional atoms are atomic formulas or formulas beginning with a quantifier. Propositional atoms are denoted by A, B, C, A₀, … By PA(L) we denote the set of propositional atoms of a language L. For an L-formula F we define
PA(F) = PA(L) ∩ PP(F)
and call PA(F) the set of propositional atoms of F. Let us illustrate these definitions by a formula of the language L_GT of group theory: the formula F given by
(∀x∃y(x ∘ y = 1) ∧ ∀x(x ∘ 1 = x)) ∧ (x = y → y = x)
has the following propositional parts:
F, ∀x∃y(x ∘ y = 1) ∧ ∀x(x ∘ 1 = x),
∀x∃y(x ∘ y = 1), ∀x(x ∘ 1 = x),
(x = y → y = x), x = y, y = x.
The propositional atoms of F are
∀x∃y(x ∘ y = 1), ∀x(x ∘ 1 = x), x = y, y = x.
Definition 1.4.2. A boolean assignment for a first order language L is a map
B : PA(L) → {t, f}.
For a boolean assignment B we define the value F^B of an L-formula F inductively by the following clauses:
1. If F ∈ PA(L), then F^B = B(F).
2. If F = ¬F₀, then F^B = ¬(F₀^B).
3. If F = F₁ ∘ F₂ for ∘ ∈ {∧, ∨, →}, then F^B = F₁^B ∘ F₂^B.
Again it is obvious that F^B ∈ {t, f} and that F^B only depends on B ↾ PA(F).
Definition 1.4.3. Let S be an L-structure. If Φ is an S-assignment, then
B_Φ : PA(L) → {t, f}, B_Φ(A) = Val_S(A, Φ)
for A ∈ PA(L) is the boolean assignment induced by Φ.
Proposition 1.4.4. Let S be an L-structure and F an L-formula. For any S-assignment Φ we have F^{B_Φ} = Val_S(F, Φ).
To prove the proposition we show
G ∈ PP(F) ⟹ G^{B_Φ} = Val_S(G, Φ)
by an easy induction on the definition of G ∈ PP(F). If G ∈ PA(F) the claim follows from the definition of B_Φ. The other cases, i.e. G = ¬G₀ and G = G₁ ∘ G₂, follow immediately from the induction hypothesis. Since F ∈ PP(F) this entails the claim of the proposition.
Corollary 1.4.5. Let F be an L-formula. If F^B = t for all boolean assignments B, then ⊨ F.
At this point we have a precise justification for our argumentation in the proof of Proposition 1.3.13:
Corollary 1.4.6. F ≡ G implies F ≡_S G.
Proof. If we have F ≡ G this means (F ↔ G)^B = t for all boolean assignments B. By Corollary 1.4.5 and Lemma 1.3.11 it follows that F ≡_S G.
Definition 1.4.7. Let M be a set of L-formulas. We say:
a) M is sententially consistent if there is a boolean assignment B such that F^B = t for all F ∈ M.
b) M is finitely sententially consistent if every finite subset of M is sententially consistent.
c) M is maximally finitely sententially consistent if M is finitely sententially consistent and maximal, which means that for any finitely sententially consistent set M₀ with M₀ ⊇ M it is M₀ = M.
For example M = {A ∧ ¬A} is sententially inconsistent (not sententially consistent), since we have for any boolean assignment B
(A ∧ ¬A)^B = B(A) ∧ (¬B(A)) = f.
The following lemma gives a useful characterisation of maximally finitely sententially consistent sets of formulas.
Lemma 1.4.8. Let M be a finitely sententially consistent set of L-formulas. Then M is maximally finitely sententially consistent iff for any L-formula F we either have F ∈ M or ¬F ∈ M.
Proof. Since M is finitely sententially consistent we cannot have an L-formula F such that
{F, ¬F} ⊆ M.
So we just have to show that if there is an L-formula F such that F ∉ M and ¬F ∉ M, then at least one of the sets
M ∪ {F} and M ∪ {¬F}
is finitely sententially consistent. Towards a contradiction assume that these two sets are not finitely sententially consistent, i.e. there are two finite subsets of M, say
{G₁, …, Gₙ} and {H₁, …, Hₘ},
such that for all boolean assignments B we have
(1.1) (G₁ ∧ … ∧ Gₙ ∧ F)^B = (H₁ ∧ … ∧ Hₘ ∧ ¬F)^B = f.
Since M is finitely sententially consistent there is a boolean assignment B₀ such that
(G₁ ∧ … ∧ Gₙ ∧ H₁ ∧ … ∧ Hₘ)^{B₀} = t.
Because F^{B₀} = t or (¬F)^{B₀} = t it follows that
(G₁ ∧ … ∧ Gₙ ∧ F)^{B₀} = t or (H₁ ∧ … ∧ Hₘ ∧ ¬F)^{B₀} = t,
contradicting (1.1).
In the following proposition we meet a typical maximally finitely sententially consistent set.
Proposition 1.4.9. Let B be a boolean assignment. Then the set {F : F^B = t} is maximally finitely sententially consistent.
Proof. M = {F : F^B = t} is by definition sententially consistent and thus of course also finitely sententially consistent. M is maximal since we either have
F^B = t or
F^B = f, i.e. (¬F)^B = t.
Less trivial and more important is the converse of Proposition 1.4.9.
Lemma 1.4.10. Let M be a maximally finitely sententially consistent set of L-formulas. Then there is a boolean assignment B_M such that
F ∈ M ⟺ F^{B_M} = t
for all L-formulas F.
Proof. Let A ∈ PA(L). By the maximality of M we either have A ∈ M or ¬A ∈ M. Thus
B_M(A) = t if A ∈ M, and B_M(A) = f if ¬A ∈ M
defines a boolean assignment. We show
F^{B_M} = t ⟺ F ∈ M
by induction on the definition of the formula F. For F ∈ PA(L) we have this by definition. Thus assume F ∉ PA(L).
If F = ¬F₀ and F^{B_M} = t, then F₀^{B_M} = f, which by the induction hypothesis entails F₀ ∉ M. By the maximality of M we thus get ¬F₀ ∈ M. If F^{B_M} = f, then F₀^{B_M} = t and thus F₀ ∈ M. Assume ¬F₀ ∈ M. Then {F₀, ¬F₀} ⊆ M, which contradicts the finite sentential consistency of M. Hence ¬F₀ ∉ M.
Let F = F₁ ∧ F₂ and F^{B_M} = t. Then F₁^{B_M} = F₂^{B_M} = t and {F₁, F₂} ⊆ M by the induction hypothesis. Assume F ∉ M. Then ¬F ∈ M by the maximality of M and we have
{¬(F₁ ∧ F₂), F₁, F₂} ⊆ M,
contradicting the finite sentential consistency of M. Hence F ∈ M. If F^{B_M} = f, then we may assume without loss of generality that F₁^{B_M} = f. This entails F₁ ∉ M by the induction hypothesis. Thus ¬F₁ ∈ M, which implies F ∉ M, because otherwise {¬F₁, F} would be a sententially inconsistent finite subset of M. The remaining cases are similar (or may be reduced to the previous ones because {¬, ∧} is a complete set of connectives).
As an immediate consequence of Lemma 1.4.10 we obtain the following properties of maximally finitely sententially consistent sets.
Proposition 1.4.11. Let M be a maximally finitely sententially consistent set of L-formulas. Then:
a) (¬F) ∈ M ⟺ F ∉ M
b) (F ∨ G) ∈ M ⟺ F ∈ M or G ∈ M
c) (F ∧ G) ∈ M ⟺ F ∈ M and G ∈ M
d) (F → G) ∈ M ⟺ F ∉ M or G ∈ M
The proof is obvious by using the equivalence
F ∈ M ⟺ F^{B_M} = t.
Another important consequence of Lemma 1.4.10 is the next theorem.
Theorem 1.4.12. Every maximally finitely sententially consistent set is sententially consistent.
This is obvious since by 1.4.10 B_M is a boolean assignment making the formulas in M true.
Theorem 1.4.12 opens up the possibility to `construct' boolean assignments for finitely sententially consistent sets. All we have to do is to extend finitely sententially consistent sets to maximal ones. There is an easy strategy to do that. Let us start with a finitely sententially consistent set M. This set may still be very small. To enlarge it we make a list F₀, F₁, … of all L-formulas and enlarge M to sets M_k in steps.
In step k + 1 we check whether M_k ∪ {F_k} is finitely sententially consistent. If it is, define M_{k+1} = M_k ∪ {F_k}, otherwise put M_{k+1} = M_k ∪ {¬F_k}. Then ⋃{M_k : k runs through the list} is maximal, and it will not be too difficult to prove that this union is still finitely sententially consistent.
This strategy works perfectly for countable languages, i.e. for languages L which
produce a countable set of formulas. In general, however, L may not be countable.
In order to handle this case too, we have to use an axiom of set theory stating that
every set can be well-ordered. This principle is equivalent to the axiom of choice and
to Zorn's lemma (cf. Exercise E 1.4.2).
Well-ordering a set can be understood as to index its elements by ordinals. An
elementary treatment of the theory of ordinals can be found in the appendix. Naively
an ordinal can be viewed as a generalisation of the natural numbers. Ordinals allow us
to `count' in nite sets. We will not need the concept of an ordinal in its full generality
right now. All we need are the following facts:
1. Ordinals are well-ordered by a relation <
2. There are three types of ordinals:
a) 0
b) successor ordinals
c) limit ordinals
32 I. Pure Logic
0 is the least ordinal; the successor α + 1 of an ordinal α is the least ordinal which is
bigger than α. Limit ordinals are those ordinals which are neither 0 nor successors.
A limit ordinal λ is characterised by the fact that for ξ < λ we also have ξ + 1 < λ.
The least limit ordinal is called ω. Below ω there are only 0 and successor ordinals.
The two important principles for ordinals are:
• Transfinite induction, which either can be formulated as
  If F(0) and F(α) → F(α + 1) for all α and (∀ξ < λ)F(ξ) → F(λ)
  for limit ordinals λ, then ∀αF(α),
or just as
  If ∀α((∀ξ < α)F(ξ) → F(α)), then ∀αF(α).
Here α, β, λ, ξ, ... are supposed to range over ordinals. Both formulations are
equivalent and from the first it is obvious that transfinite induction along ordinals
is a straightforward generalisation of complete induction.
• Transfinite recursion, which (in a special form) says that there is a uniquely
determined function F from the ordinals into the sets satisfying the following
recursion equations
  F(0) = X
  F(α + 1) = G(α, F(α))
  F(λ) = H(λ, {F(ξ) : ξ < λ})
for limit ordinals λ, where G and H are known functions and X is a set. Thus
transfinite recursion is nothing but a straightforward generalisation of the principle
of definition by primitive recursion as it is known from the natural numbers
(cf. Definition 3.1.3).
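The countable special case of transfinite recursion, recursion along the natural numbers, can be sketched concretely. The following is our own illustration, not part of the text; the names F, G, X mirror the recursion equations above:

```python
# Definition by recursion on the natural numbers: F(0) = X and
# F(n + 1) = G(n, F(n)).  This is the countable special case of the
# principle above; below omega there are no limit stages.
def recurse(X, G, n):
    value = X                    # F(0) = X
    for k in range(n):
        value = G(k, value)      # F(k + 1) = G(k, F(k))
    return value

# Example: X = 1 and G(k, v) = v * (k + 1) define F(n) = n!.
print(recurse(1, lambda k, v: v * (k + 1), 5))  # prints 120
```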
We give a short introduction to the theory of ordinals in the appendix. The reader who
feels uncomfortable with transfinite induction and recursion may restrict himself to the
countable case where he only has to deal with natural numbers, primitive recursion
and complete induction.
Just for completeness we want to mention that in the framework of an axiomatic
set theory the principles of transfinite induction and recursion follow from the axioms.
In the sequel we use the well-ordering theorem (which is, as we already mentioned,
equivalent to the axiom of choice) in the following form:
Theorem 1.4.13 (Well-ordering theorem). For any set M there is an ordinal κ
and a one-one map f from M into κ.
The least such ordinal κ is the cardinality of the set M. We will write card(M) for the
cardinality of M. Thus two sets M0, M1 have the same cardinality if there is a one-one
map from M0 onto M1. The infinite cardinals are numbered by the ℵ-function. So
  ℵ0 = card(IN)
is the smallest infinite cardinal, ℵ1 is the next one and so on. Sets whose cardinality is
≤ ℵ0 are called countable. More details concerning cardinals and their arithmetic can
be found in the appendix.
Let us return to the propositional properties of first order languages. We want to
show that we can extend a finitely sententially consistent set to a maximally finitely
sententially consistent set.
Lemma 1.4.14. Let M be a finitely sententially consistent set. Then there is a
maximally finitely sententially consistent set M* comprising M.
Proof. Let κ be the cardinality of the set of L-formulas and assume that (F_α)_{α<κ} is an
enumeration of the L-formulas. For α ≤ κ we define sets M_α by the following recursion
equations. Here we use the principle of transfinite recursion.
1. M_0 = M
2. M_{α+1} = M_α ∪ {F_α} if this set is finitely sententially consistent,
   M_{α+1} = M_α ∪ {¬F_α} otherwise
3. M_λ = ∪_{α<λ} M_α for limit ordinals λ.
Put M* = ∪_{α<κ} M_α. M* is maximal by definition and we obviously have M ⊆ M*. Thus
all we have to show is that M* is finitely sententially consistent. For that it suffices
to prove that every M_α is finitely sententially consistent, and this will be shown by
transfinite induction on α. M_0 is finitely sententially consistent by hypothesis. For
limit ordinals λ we have M_λ = ∪_{α<λ} M_α. Thus any finite subset N ⊆ M_λ is already a
subset of some M_α for α < λ and therefore consistent by induction hypothesis.
So the only case which really needs work is the successor case. Thus assume that
α = β + 1. If M_β ∪ {F_β} is finitely sententially consistent, then we are done since
this set is M_{β+1}. So assume that M_β ∪ {F_β} is finitely sententially inconsistent.
Then there are formulas {G_1, ..., G_n} ⊆ M_β such that the set {G_1, ..., G_n, F_β}
is sententially inconsistent. If we assume that M_{β+1}, i.e. M_β ∪ {¬F_β}, is finitely
sententially inconsistent, too, then we obtain formulas
  {H_1, ..., H_m} ⊆ M_β such that {H_1, ..., H_m, ¬F_β}
is sententially inconsistent. But {G_1, ..., G_n, H_1, ..., H_m} is a finite subset of M_β
which is sententially consistent by induction hypothesis. Thus there is a boolean
assignment B such that
  G_1^B = ... = G_n^B = H_1^B = ... = H_m^B = t.
Since either
  F_β^B = t or (¬F_β)^B = t,
this implies that either {G_1, ..., G_n, F_β} or {H_1, ..., H_m, ¬F_β} is sententially
consistent. A contradiction. So our assumption that M_{β+1} is finitely sententially
inconsistent was wrong and we have shown that M_{β+1} is finitely sententially consistent.
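In the countable case (κ = ω) the construction of the proof can be carried out quite concretely. The following sketch is our own illustration, not part of the text: formulas are represented as boolean functions over finitely many atoms, so that sentential consistency can be decided by trying all assignments.

```python
from itertools import product

def consistent(formulas, atoms):
    # is there a boolean assignment B making every formula true?
    for bits in product([True, False], repeat=len(atoms)):
        B = dict(zip(atoms, bits))
        if all(F(B) for F in formulas):
            return True
    return False

def extend(M, enumeration, atoms):
    # successor step of the proof: add F_a if that stays consistent,
    # otherwise add its negation
    M = list(M)
    for F in enumeration:
        if consistent(M + [F], atoms):
            M.append(F)
        else:
            M.append(lambda B, F=F: not F(B))
    return M

atoms = ["p", "q"]
M0 = [lambda B: B["p"]]                           # M = {p}
enum = [lambda B: B["q"], lambda B: not B["p"]]   # enumeration: q, not-p
M_star = extend(M0, enum, atoms)
assert consistent(M_star, atoms)                  # the extension stays consistent
```

Here q is added, while not-p would destroy consistency, so its negation is added instead.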
Summing up our results we get the following theorem.
Theorem 1.4.15 (Compactness theorem for propositional logic). A set M of
L-formulas is sententially consistent iff it is finitely sententially consistent.
Proof. Any sententially consistent set is of course also finitely sententially consistent.
If M is finitely sententially consistent, then we use Lemma 1.4.14 to obtain a maximally
finitely sententially consistent set M* which by Theorem 1.4.12 is sententially
consistent. Since M ⊆ M*, M is sententially consistent, too.
It will be the aim of the next section to extend the compactness theorem of propositional
logic to full first order logic.
Exercises
E 1.4.1. There is a colouring of a map by k colours if there is an association of the
k colours to the countries such that
1. every country gets a colour,
2. neighbouring countries get different colours.
Prove that an infinite map can be coloured by k colours if this is true for every finite
sub-map.
E 1.4.2. A binary relation ≤ ⊆ X × X is called a partial ordering if for all x, y, z ∈ X
one has:
1. x ≤ x (reflexivity)
2. x ≤ y ∧ y ≤ z → x ≤ z (transitivity)
3. x ≤ y ∧ y ≤ x → x = y (antisymmetry)
Y ⊆ X is called a chain (w.r.t. ≤) if for all x, y ∈ Y
  x ≤ y ∨ y ≤ x.
∅ ≠ X is ordered inductively if every ≤-chain Y in X has an upper bound, i.e.
  ∃x∈X ∀y∈Y (y ≤ x).
a) Use the well-ordering theorem and transfinite recursion to prove Zorn's lemma:
Every inductively ordered set has maximal elements, i.e.
  ∃x∈X ∀y∈X (x ≤ y → x = y).
b) Prove Lemma 1.4.14 using Zorn's lemma.
c) Let M be a sententially consistent set. Prove that there is a maximal sententially
consistent set M* ⊇ M.
E 1.4.3. Prove that the following formulas are boolean valid, i.e. formulas F with
F^B = t for all boolean assignments B:
a) (A → (B → C)) → ((A → B) → (A → C))
b) ((A → B) ∧ (A → C)) → (A → (B ∧ C))
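Since a formula has only finitely many propositional atoms, boolean validity can be checked mechanically by running through all assignments. As a sketch (our own, not part of the exercises), the two formulas of E 1.4.3 can be verified like this:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# (A -> (B -> C)) -> ((A -> B) -> (A -> C))
def formula_a(A, B, C):
    return implies(implies(A, implies(B, C)),
                   implies(implies(A, B), implies(A, C)))

# ((A -> B) and (A -> C)) -> (A -> (B and C))
def formula_b(A, B, C):
    return implies(implies(A, B) and implies(A, C),
                   implies(A, B and C))

# run through all 8 boolean assignments for A, B, C
assert all(formula_a(*bits) and formula_b(*bits)
           for bits in product([True, False], repeat=3))
```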
1.5 The Compactness Theorem for First Order Logic
The aim of this section is to extend the compactness theorem for propositional logic
to full first order logic. The compactness theorem is due to Kurt Gödel (1930) and
Anatolii I. Mal'cev [*1909, †1967] (1936). The proof we give here follows that of
Leon Henkin [*1921] from 1949.
To simplify the arguments we restrict our language to the logical symbols ¬, ∨, ∃. We
know from our previous work that this is sufficient because ¬, ∨ is a complete set of
connectives and ∀xF ≡ ¬∃x¬F according to 1.3.13. Nevertheless we will still use the
other symbols, regarding them as defined symbols, e.g.
  F → G means ¬F ∨ G
  F ∧ G means ¬(¬F ∨ ¬G)
  ∀xF means ¬∃x¬F
We start by studying some basic properties of quantifiers.
Proposition 1.5.1. Let F be an L-formula and t an L-term. Then
  ⊨ F_x(t) → ∃xF
and hence also ⊨ ∀xF → F_x(t).
Proof. First we emphasise that both claims are indeed the same: F_x(t) → ∃xF stands
for ¬F_x(t) ∨ ∃xF, while ∀xF → F_x(t) is ¬¬∃x¬F ∨ F_x(t). So ⊨ ∀xF → F_x(t) follows
from ⊨ F_x(t) → ∃xF taking ¬F instead of F.
Let S be an L-structure and Φ an S-assignment. If S ⊭ ∃xF[Φ], then S ⊭ F[Ψ]
for all Ψ ∼x Φ. Define
  Ψ(z) = Φ(z) for z ≠ x and Ψ(x) = t^S[Φ].
Then Ψ ∼x Φ and Ψ(x) = t^S[Φ]. Thus by Lemma 1.3.6 we have
  Val_S(F, Ψ) = Val_S(F_x(t), Φ)
which entails S ⊭ F_x(t)[Φ]. Hence S ⊨ (¬F_x(t) ∨ ∃xF)[Φ].
Definition 1.5.2. Let M be a set of L-formulas.
a) We say that M contains all witnesses if for any formula ∃xF there is a closed
term c_∃xF, depending only on ∃xF, such that
  ∃xF → F_x(c_∃xF)
belongs to M.
b) We call M a Henkin set if M contains all witnesses and also contains all formulas
  F_x(t) → ∃xF
where t is an arbitrary term.
The term c_∃xF is supposed to witness the element x whose existence is claimed in ∃xF.
In general there is a difference between c_∃xF and c_∃yF. But there is no semantical
difference between the Henkin constants for
  ∃xF and ∃yF_x(y).
This will be proved in the following proposition.
Proposition 1.5.3. Let M be a sententially consistent Henkin set and B a boolean
assignment such that
  G^B = t for all G ∈ M.
If F and F̃ are obtained from each other by renaming bound variables, then
  F^B = F̃^B.
Proof. We use induction on the length of F. If F is not a propositional atom, the
claim follows directly from the induction hypothesis. If F is an atomic formula, then
F = F̃ and of course F^B = F̃^B.
So the only case left to consider is that F = ∃xG and F̃ = ∃yG̃_x(y), where G and G̃ are
obtained from each other by renaming bound variables. By the induction hypothesis we have
  G_x(c_∃xG)^B = G̃_x(c_∃xG)^B
since G_x(c_∃xG) and G̃_x(c_∃xG) are obtained from each other by renaming bound
variables. Since M contains all witnesses, the formula
  ∃xG → G_x(c_∃xG)
belongs to M. Now assume that F^B = t. Then we have
  G_x(c_∃xG)^B = t
and so by the induction hypothesis
  G̃_x(c_∃xG)^B = t.
Since also
  G̃_x(c_∃xG) → ∃yG̃_x(y)
belongs to M, we conclude that
  (∃yG̃_x(y))^B = t.
This means F̃^B = t. Similarly we obtain F^B = t from F̃^B = t.
Theorem 1.5.4. Let M be a Henkin set. Then M is sententially consistent iff M is
consistent.
Proof. It is obvious that the consistency of M entails the sentential consistency
of M. For the opposite direction pick a boolean assignment B such that
  F^B = t for all F ∈ M.
Assume that L = L(C, F, P). We are going to construct an L-structure S and
an assignment Φ such that
  S ⊨ F[Φ] for all F ∈ M.
We define
  S = {t : t is an L-term}.
For c ∈ C let c^S = c. For f ∈ F define f^S(t1, ..., tn) = (f t1 ... tn) and for P ∈ P put
  P^S = {(t1, ..., tn) : (P t1 ... tn)^B = t}.
This defines a structure S = (S, C, F, P) with
  C = {c^S : c ∈ C}, F = {f^S : f ∈ F} and P = {P^S : P ∈ P}.
We obtain an S-assignment Φ by defining Φ(x) = x. Then we obviously get t^S[Φ] = t
for every L-term t by induction on the length of t. Now we show Val_S(F, Φ) = F^B by
induction on the length of F.
1. If F = (P t1 ... tn), then S ⊨ F[Φ] iff (t1, ..., tn) ∈ P^S, which by definition holds
iff (P t1 ... tn)^B = t.
2. If F = ¬G, then Val_S(F, Φ) = ¬Val_S(G, Φ) = (by i.h.) ¬(G^B) = (¬G)^B.
3. If F = (G ∨ H), then
Val_S(F, Φ) = Val_S(G, Φ) ∨ Val_S(H, Φ) = (by i.h.) G^B ∨ H^B = (G ∨ H)^B.
4. Let F = ∃xG and assume first S ⊨ F[Φ]. Then there is an assignment Ψ ∼x Φ
such that S ⊨ G[Ψ]. Pick t = Ψ(x). By renaming the bound variables in F
we may assume that FV(t) ∩ BV(G) = ∅ (cf. Exercise E 1.3.7 and Proposition
1.5.3). Then by Lemma 1.3.6 and Proposition 1.5.3
  t = Val_S(G, Ψ) = Val_S(G_x(t), Φ),
i.e. S ⊨ G_x(t)[Φ]. Thus G_x(t)^B = t by induction hypothesis, which entails
(∃xG)^B = t because G_x(t) → ∃xG is in M.
For the opposite direction assume (∃xG)^B = t. Since M contains all witnesses this
entails G_x(c_∃xG)^B = t. Hence S ⊨ G_x(c_∃xG)[Φ] by induction hypothesis. Define
  Ψ(z) = Φ(z) for z ≠ x and Ψ(x) = c_∃xG.
Then Ψ ∼x Φ and Ψ(x) = (c_∃xG)^S[Φ]. By 1.3.6 this entails S ⊨ G[Ψ] and we get
S ⊨ ∃xG[Φ].
One should observe that the key idea of the proof is a very simple one. Since we
have all formulas ∃xF ↔ F_x(c_∃xF) in M we are able to eliminate the quantifiers in
the formulas of M. Thus every formula in M is equivalent to a quantifier free formula,
which entails that sentential consistency and consistency coincide.
The problem which is left is to show that any set of formulas can be extended to
a Henkin set. This is of course impossible without extending the language. Thus
assume that L = L(C, F, P) is a first order language. Define
  K_0 = ∅,
  L_0 = L,
  K_{n+1} = {c_∃xF : ∃xF ∈ L_n},
  L_{n+1} = L(C ∪ ∪_{i≤n+1} K_i, F, P),
where G ∈ L_n means that G is a formula of L_n, and c_∃xF is always a new constant
symbol which has not yet occurred. Define
  K_H = ∪_{n∈IN} K_n and L_H = ∪_{n∈IN} L_n.
We call K_H the set of Henkin constants and L_H the Henkin extension of L. Every
Henkin constant c ∈ K_H and every formula F ∈ L_H possesses a Henkin degree defined by
  deg_H(c) = min{n : c ∈ K_n}
and
  deg_H(F) = min{n : F ∈ L_n}.
Thus constant symbols and formulas of the original language L are those with Henkin
degree 0. The Henkin set H_L for a language L consists of all formulas
  ∃xF → F_x(c_∃xF)
and
  F_x(t) → ∃xF
for F ∈ L_H and t a term of L_H. Before we formulate the theorem about Henkin
extensions let us fix some notations and phrases.
Definition 1.5.5. Let L1 = L(C1, F1, P1) and L2 = L(C2, F2, P2) be first order
languages.
a) We say that L1 is a sub-language of L2, written L1 ⊆ L2, if C1 ⊆ C2, F1 ⊆ F2
and P1 ⊆ P2.
b) If S2 = (S, C2, F2, P2) is an L2-structure and L1 ⊆ L2, then the L1-retract of S2
is the structure S1 = (S, C1, F1, P1), where C1 = {c^{S2} : c ∈ C1}, F1 = {f^{S2} : f ∈ F1}
and P1 = {P^{S2} : P ∈ P1}. Vice versa we call S2 an L2-expansion of S1.
Now for example think of the language LGT of group theory, and let L'GT be obtained
by adding a unary function symbol ⁻¹ for the inverse function. Then we have
LGT ⊆ L'GT, and in an L'GT-expansion of an LGT-structure we only have to add an
interpretation for that new function symbol.
In section 1.3 we already mentioned the expansion S_S of an L-structure S obtained
by adding constant symbols for the elements of S. In Exercise E 1.3.5 we saw that for
an L-formula F we have
  S ⊨ F[Φ] iff S_S ⊨ F[Φ].
The same proof yields the following more general result:
Proposition 1.5.6. Let L1 ⊆ L2 and let S2 be an L2-expansion of the L1-structure S1.
Then Φ is an S1-assignment iff it is an S2-assignment, and we have for all L1-formulas F
  S1 ⊨ F[Φ] iff S2 ⊨ F[Φ]
for any S1-assignment Φ.
Now we come to the announced theorem.
Theorem 1.5.7. For any L-structure S and any S-assignment Φ there is an L_H-
expansion S_H of S such that S_H ⊨ F[Φ] for all F ∈ H_L.
Proof. Fix S = (S, C, F, P) and Φ. By recursion on n we define structures S_n such that
S_n expands S_m for n ≥ m and S_n ⊨ F[Φ] holds for all formulas F ∈ H_L of Henkin
degree ≤ n. Put
  S_0 = S.
If F ∈ H_L and deg_H(F) = 0, then F does not contain Henkin constants. Thus F
is one of the formulas G_x(t) → ∃xG, which are valid in any structure by Proposition
1.5.1.
Now assume that S_n is defined. We construct S_{n+1} as an L_{n+1}-expansion of S_n,
i.e. we have to define
  c^{S_{n+1}} for c ∈ C ∪ ∪_{m≤n+1} K_m.
Put c^{S_{n+1}} = c^{S_n} if deg_H(c) ≤ n. If deg_H(c) = n + 1, then there is a formula
∃xG with deg_H(∃xG) = n such that c = c_∃xG. By the induction hypothesis S_n is an
L_n-structure. If S_n ⊨ ∃xG[Φ], then there is an S_n-assignment Ψ ∼x Φ with
S_n ⊨ G[Ψ] and we define (c_∃xG)^{S_{n+1}} = Ψ(x). Otherwise we choose
(c_∃xG)^{S_{n+1}} ∈ S arbitrarily. Define
  C_{n+1} = {c^{S_{n+1}} : c ∈ C ∪ ∪_{m≤n+1} K_m}
and S_{n+1} = (S, C_{n+1}, F, P). We have to show that S_{n+1} ⊨ F[Φ] for all F ∈ H_L such
that deg_H(F) ≤ n + 1. If deg_H(F) ≤ n this follows from the induction hypothesis since
S_n is the L_n-retract of S_{n+1}. If F = G_x(t) → ∃xG we have S_{n+1} ⊨ F[Φ] already by
1.5.1. Thus assume that
  F = ∃xG → G_x(c_∃xG) with deg_H(c_∃xG) = n + 1.
If S_{n+1} ⊭ ∃xG[Φ], then trivially
  S_{n+1} ⊨ (∃xG → G_x(c_∃xG))[Φ].
If otherwise
  S_{n+1} ⊨ ∃xG[Φ],
we already have S_n ⊨ ∃xG[Φ] because ∃xG ∈ L_n and S_n is the L_n-retract of S_{n+1}.
Thus (c_∃xG)^{S_{n+1}} = Ψ(x) for some Ψ ∼x Φ such that S_n ⊨ G[Ψ]. By 1.3.6 this
entails S_{n+1} ⊨ G_x(c_∃xG)[Φ]. Altogether this yields
  S_{n+1} ⊨ (∃xG → G_x(c_∃xG))[Φ].
Finally we define K = ∪_{n∈IN} C_n and S_H = (S, K, F, P), which gives us the desired
expansion.
Theorem 1.5.8 (Compactness theorem for first order logic). Let M be a set
of L-formulas such that every finite subset of M is consistent. Then M is consistent.
Proof. Let M be a finitely consistent set of L-formulas. First we show that M ∪ H_L is
finitely consistent. If N ⊆ M ∪ H_L is finite, then we may decompose N = M_0 ∪ H_0
where M_0 ⊆ M and H_0 ⊆ H_L are both finite. M_0 is consistent, which gives us an
L-structure S and an S-assignment Φ with S ⊨ F[Φ] for all F ∈ M_0. By 1.5.7 we have
an L_H-expansion S_H of S such that S_H ⊨ F[Φ] for all F ∈ H_0. Thus S_H ⊨ F[Φ] for
all F ∈ N and we are done.
M ∪ H_L is of course a Henkin set. The finite consistency of M ∪ H_L trivially
entails the finite sentential consistency of M ∪ H_L. By the compactness theorem for
propositional logic we obtain the sentential consistency of M ∪ H_L, which by 1.5.4
means that M ∪ H_L is consistent. Since M ⊆ M ∪ H_L this comprises the consistency
of M.
Now we are going to reformulate this theorem. To this end we make the following
definition (cf. also Definition 2.1.1).
Definition 1.5.9. Let L be a first order language and M a set of L-sentences. An
L-structure S is a model of M, briefly S ⊨ M, if we have S ⊨ F for all F ∈ M.
The following corollary is just a restatement of Theorem 1.5.8.
Corollary 1.5.10. Let M be a set of L-sentences. Then M has a model iff every
finite subset of M has one.
Before we present some consequences of the compactness theorem we are going to take
a look at a consequence of the proof.
For a first order language L = L(C, F, P) we define
  card(L) = card(C ∪ F ∪ P),
i.e. by the cardinality of a language we mean the cardinality of the set of its non-logical
symbols.
Lemma 1.5.11. Let L be a first order language. Then the Henkin extension L_H has
the cardinality
  card(L_H) = max(ℵ0, card(L)).
Proof. For card(L) ≤ ℵ0 this is indicated in the exercises. The general statement
is obtained by a similar proof.
Theorem 1.5.12. Let M be a set of L-sentences. If M has a model, then M has a
model S of cardinality card(L_H), i.e. with
  card(S) = max(ℵ0, card(L)).
Proof. For card(L) ≤ ℵ0 this will be proved in the exercises. Nearly the same proof
yields the theorem. Here we restrict ourselves to remarking that the claim can be
extracted from the proof of the compactness theorem: the model S is the structure
of the same name constructed in the proof of Theorem 1.5.4.
The compactness theorem for first order logic is one of the most important theorems
of pure logic. We are going to use it over and over again. Just to give a flavour of
its power we want to sketch how to use it in the construction of a model of the real
numbers containing infinitesimals.
First let L_N = L(0, 1, +, ·, <, =) be the language of number theory. Define
  Th(N) = {F : F is an L_N-sentence and (IN, 0, 1, +, ·, <, =) ⊨ F}.
Then Th(N) is consistent by definition. Let c be a new constant symbol and define a
set of sentences
  M = Th(N) ∪ {n̄ < c : n ∈ IN}
where n̄ stands for the term 1 + ... + 1 (n times). If N ⊆ M is a finite subset, then
N = N_0 ∪ N_1 where
  N_0 ⊆ Th(N) and N_1 = {n̄_1 < c, ..., n̄_k < c}.
Put n = max{n_1, ..., n_k} and interpret c by n + 1. Then
  N = (IN, 0, 1, n + 1, +, ·, <, =)
is a model of N. Thus every finite subset of M has a model and by the compactness
theorem so has M. Call this model N*. Then there is a `number' c^{N*} in the
domain of N* which is larger than all n ∈ IN, and obviously N* ⊨ Th(N).
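The satisfiability of the finite subsets can be made completely explicit; the following one-liner (our own sketch, not part of the text) is all the model construction for N_1 amounts to:

```python
# Interpret the new constant c by max{n1, ..., nk} + 1; then every
# sentence n̄ < c of the finite subset N1 holds in the standard model.
def interpret_c(bounds):
    c = max(bounds, default=0) + 1
    assert all(n < c for n in bounds)   # every bound is indeed below c
    return c

print(interpret_c({3, 17, 5}))  # prints 18
```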
But we promised to show that there is an extension of the reals containing infinitesimals.
To get this let L be the language of the reals (we don't need to specify L further)
and denote by L_R the extended language which has constant symbols for all reals.
Take the L_R-expansion R_R of R and let
  Th(R_R) = {F : F is an L_R-sentence and R_R ⊨ F}.
Now put
  M = Th(R_R) ∪ {n̄ < c : n ∈ IN}
where c is a new constant symbol. Clearly M is finitely satisfiable and thus has a
model R*. R* contains c^{R*}, which is bigger than all natural numbers and thus (recall
that R is an Archimedean field) bigger than all reals. Conversely (c^{R*})⁻¹ is smaller
than all positive reals. We call an element α ∈ R* finite if there is an r ∈ R such that
  |α| < r.
α ∈ R* is infinitesimal if
  |α| < r
for all 0 < r ∈ R. We say α, β ∈ R* are infinitesimally close if |α − β| is infinitesimal.
In this case we write
  α ≈ β.
Now we can prove the following result: for every finite α ∈ R* there is a uniquely
determined r_0 ∈ R with
  α ≈ r_0.
To see that r_0 exists we define
  r_0 = sup{r ∈ R : r < α}.
Because α is finite the set
  {r ∈ R : r < α}
is bounded in R and so the supremum exists in R. This means
  r_0 ∈ R.
By the definition of r_0 we have
  α − r_0 ≤ r
for all 0 < r ∈ R. Since also
  r_0 − r ≤ α
for all 0 < r ∈ R, we get −(α − r_0) ≤ r. Hence
  |α − r_0| ≤ r
for all 0 < r ∈ R, which shows
  α ≈ r_0.
That r_0 is uniquely determined can be seen as follows: let α ≈ r_0 and α ≈ r_1 with
r_0, r_1 ∈ R. Then
  0 ≤ |r_0 − r_1| ≤ |α − r_0| + |α − r_1| ≤ 2r
for all 0 < r ∈ R. Since r_0 − r_1 ∈ R we must have r_0 = r_1. So we have a map
  st : R* → R
mapping a finite element α ∈ R* to its standard part, i.e. to the uniquely determined
r ∈ R such that α ≈ r. Just to get an impression of how to use infinitesimals: continuity
for functions on R is defined by
  f is continuous in x ⟺ ∀ε(ε ≈ 0 → f(x + ε) ≈ f(x))
and the derivative is just
  f′(x) = st((f(x + ε) − f(x)) / ε) for some ε ≈ 0.
We leave this topic now and advise the interested reader to look at, e.g., the book of
A. E. Hurd and P. A. Loeb: An Introduction to Nonstandard Real Analysis.
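Although the infinitesimals above live in a nonstandard model, the formula for f′(x) can be mimicked numerically: a small but finite eps plays the role of the infinitesimal ε, and the smallness of eps imitates taking the standard part. This is only an analogy (our own sketch), not nonstandard analysis:

```python
# Numerical analogy only: eps stands in for an infinitesimal, and
# st(...) is imitated by eps being tiny.
def derivative(f, x, eps=1e-8):
    return (f(x + eps) - f(x)) / eps

d = derivative(lambda t: t * t, 3.0)   # exact derivative of t^2 at 3 is 6
assert abs(d - 6.0) < 1e-4
```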
After this trip to the real numbers we will discuss the limitations of the expressive
power of first order logic which are implied by the compactness theorem. To this end
we give an example concerning group theory. If we want to express that a group has
the torsion property we would like to say
  ∀x∃n ≥ 1(xⁿ = 1).
This seems to be expressible in the language of group theory. But in fact we have
made a statement which is not first order. This may be clear from the syntactical
point of view, since xⁿ abbreviates x · ... · x (n times) and the quantifier ∃n ≥ 1 does
not range over the elements of an intended structure but over natural numbers.
Therefore the torsion property would look like
  ∀x(x = 1 ∨ x · x = 1 ∨ x · x · x = 1 ∨ ...).
But this is an infinite expression and no first order sentence. Now let us see why there
cannot be any set of first order sentences expressing the torsion property: this is a
consequence of the compactness theorem. Assume that there is a set M of
LGT-sentences such that for any group G
  G ⊨ M iff G has the torsion property.
Now let AxGT be the axiom set of section 1.3 such that for any LGT-structure S we have
  S ⊨ AxGT iff S is a group,
still assuming that = is interpreted by {(s, s) : s ∈ S}, i.e. = is interpreted standardly.
Why we do not have to worry about standardness will be clarified in section 1.10. Take
T to be AxGT plus the sentences
  ¬(cⁿ = 1) for every n ≥ 1
where c is a new constant symbol. Then every finite subset of T ∪ M has a model,
since there are only finitely many sentences containing the new constant symbol. All
we have to do is to find a group with the torsion property in which c can be interpreted
by an element of order N, where N is bigger than all n such that
  ¬(cⁿ = 1)
occurs in the given finite set of sentences. Such a group is for example the direct sum
of all groups
  Z/pZ
where p is a prime. So by the compactness theorem we find a model of M ∪ T. And if
we assume that = is interpreted standardly, then the retract to LGT is a structure G
such that
  G ⊨ AxGT, i.e. G is a group,
and G does not have the torsion property, since for g = c^G ∈ G we know
  gⁿ ≠ 1 for every n ≥ 1
because G ⊨ T. So we have G ⊨ M but G does not have the torsion property. A
contradiction. In the exercises we will give a lot of similar examples.
Pursuing the subject of logic we shall next study the notion of logical consequence.
Exercises
E 1.5.1. A partial ordering ≤ on X (cf. E 1.4.2) is called linear if
  ∀x, y ∈ X(x ≤ y ∨ y ≤ x).
A linear ordering is a well-ordering if for every sequence (x_n)_{n∈IN} ⊆ X there is a k ∈ IN
with x_k ≤ x_{k+1}. Prove that there is no L(≤, =)-theory T such that for all
L(≤, =)I-structures (cf. E 1.3.6) S = (S, ≤^S, =^S)
  S ⊨ T ⟺ ≤^S is a well-ordering on S.
E 1.5.2. Prove that there is no L_VS-theory T (cf. E 1.3.6) such that for all
L_VS I-structures S
  S ⊨ T ⟺ S is a finite dimensional vector space over a field.
E 1.5.3. Let L_R = L({x : x ∈ R}, {f : f : Rⁿ → R, n ∈ IN}, {=, <}), where R is the
set of real numbers. Let R be the L_R-structure with domain R where all symbols are
interpreted canonically. Prove that there is an L_R I-structure R* (cf. E 1.3.6) such
that for all L_R-sentences F
  R ⊨ F ⟺ R* ⊨ F
and there is an r* ∈ R* with
  R* ⊨ (r < x)[r*]
for all r ∈ R.
E 1.5.4. A language L = L(C, F, P) is countable if C ∪ F ∪ P is finite or countably
infinite. Let L be countable.
a) Prove that there are only countably many L-terms and L-formulas.
b) Prove that the Henkin language L_H is countable.
c) If M is a set of L-sentences and there is an L-structure S with S ⊨ M, then
there is a countable L-structure S′ with S′ ⊨ M.
E 1.5.5. Let T_F be the theory of fields. T_OF, the theory of ordered fields, is obtained
from T_F by adding a binary predicate symbol < and the following axioms:
1. ∀x(¬x < x)
2. ∀x∀y∀z(x < y ∧ y < z → x < z)
3. ∀x∀y(x < y ∨ x = y ∨ y < x)
4. ∀x∀y∀z(x < y → x + z < y + z)
5. ∀x∀y∀z(x < y ∧ 0 < z → x · z < y · z)
A structure S ⊨ T_OF is called Archimedean ordered if for any s ∈ S there is an n ∈ IN
such that
  S ⊨ (x < 1 + ... + 1)[s]  (with n summands 1).
a) Prove that there is no theory T such that for all L_OF I-structures S (cf. E 1.3.6)
  S ⊨ T ⟺ S is an Archimedean ordered field.
b) If there is an L_OF-sentence F with S ⊨ F for all non-Archimedean ordered fields
S, then S′ ⊨ F for all ordered fields S′.
E 1.5.6. Let L be a first order language. Define a theory T such that for any
L-structure S
a) S ⊨ T ⟺ S has 3 elements,
b) S ⊨ T ⟺ S has infinitely many elements.
c) Prove: if F is an L-sentence such that S ⊨ F for all infinite L-structures S, then
there is an m > 0 such that S′ ⊨ F for all L-structures S′ with at least m
elements.
1.6 Logical Consequence
In the beginning we emphasised that mathematical logic was launched by the desire
to study mathematical reasoning. Up to now we have studied just languages and some
of their models (with already surprising results, as the re-justification of the infinitesimals
at the end of the previous section shows). So we think that now it is time to study
reasoning. First we need a precise definition of reasoning, i.e. a definition of logical
consequence. In the following section we assume that L is some first order language.
Definition 1.6.1. Let M be a set of L-formulas. We say that a formula F is a logical
consequence of M, denoted by M ⊨ F, if for any L-structure S and any S-assignment Φ
with S ⊨ M[Φ] we also have S ⊨ F[Φ]. As a matter of convention we will always
write F_1, ..., F_n ⊨ F instead of {F_1, ..., F_n} ⊨ F.
Proposition 1.6.2. F ≡ G iff F ⊨ G and G ⊨ F.
The proof of 1.6.2 follows directly from the definitions of F ≡ G and F ⊨ G.
A very useful characterisation of logical consequence is given by
Lemma 1.6.3. M ⊨ F iff M ∪ {¬F} is inconsistent.
Proof. M ∪ {¬F} is consistent iff there is an L-structure S and an S-assignment Φ
such that S ⊨ M[Φ] and S ⊭ F[Φ]. But this is equivalent to M ⊭ F.
Proposition 1.6.4 (Ex falso quodlibet). If M is an inconsistent set of L-formulas,
then M ⊨ F for any L-formula F.
Proposition 1.6.5 (Compactness theorem, 2nd version). If M ⊨ F, then there
is already a finite subset M_0 ⊆ M such that M_0 ⊨ F.
Proof. If M ⊨ F, then M ∪ {¬F} is inconsistent. By the compactness theorem we
find a finite inconsistent subset N ⊆ M ∪ {¬F}. With M_0 = N ∩ M also M_0 ∪ {¬F}
is inconsistent, which entails M_0 ⊨ F.
Proposition 1.6.6 (Deduction theorem). M, G ⊨ F iff M ⊨ G → F.
Proof. The inconsistency of M ∪ {G, ¬F} is equivalent to the inconsistency of
M ∪ {G ∧ ¬F}, which because of ¬(G → F) ≡ G ∧ ¬F is the same as the inconsistency
of M ∪ {¬(G → F)}.
Since the inconsistency of M ∪ {¬F} together with M ⊆ N entails the inconsistency
of N ∪ {¬F}, we have the following theorem.
Proposition 1.6.7 (Monotonicity of first order logic). M ⊨ F and M ⊆ N
imply N ⊨ F.
In the previous section we already used the symbol ⊨ F to denote the validity of F in
any L-structure under each assignment. We obviously have S ⊨ ∅[Φ] for all structures
S and S-assignments Φ. Thus ∅ ⊨ F entails ⊨ F, while the opposite direction holds
trivially. So we obtain the following easy consequences.
Proposition 1.6.8. If ⊨ F, then M ⊨ F for all sets of formulas M.
As M ∪ {F, ¬F} is always inconsistent we have the following observation.
Proposition 1.6.9. If F ∈ M, then M ⊨ F.
Another simple `rule' of inference is given by the next proposition.
Proposition 1.6.10 (Modus ponens). If M ⊨ F and M ⊨ F → G, then M ⊨ G.
Proof. Let S be an L-structure and Φ an S-assignment such that S ⊨ M[Φ]. By the
hypothesis this entails
  S ⊨ F[Φ] and S ⊨ (F → G)[Φ].
Hence S ⊨ G[Φ].
A bit more subtle to prove are the following quantifier rules.
Proposition 1.6.11.
a) If M ⊨ G → F and x ∉ FV(M) ∪ FV(G), where
  FV(M) = ∪{FV(H) : H ∈ M},
then M ⊨ G → ∀xF.
b) If M ⊨ F → G and x ∉ FV(M) ∪ FV(G), then M ⊨ ∃xF → G.
Proof. Again both claims are dual, so it suffices to prove b). Let S be an L-structure
and Φ an S-assignment such that
  S ⊨ M[Φ].
Then we have
  S ⊨ (F → G)[Φ].
If S ⊨ G[Φ] we are done. Thus assume
  S ⊭ G[Φ],
which entails that
  S ⊭ F[Φ].
If Ψ is any S-assignment such that Ψ ∼x Φ, then we have
  S ⊨ M[Ψ] iff S ⊨ M[Φ]
as well as
  S ⊨ G[Ψ] iff S ⊨ G[Φ]
because x ∉ FV(M ∪ {G}). Hence S ⊨ M[Ψ] and S ⊭ G[Ψ], which by M ⊨ F → G
entails
  S ⊭ F[Ψ].
Thus S ⊭ ∃xF[Φ] and we obtain
  S ⊨ (∃xF → G)[Φ].
This proves M ⊨ ∃xF → G.
Exercises
E 1.6.1. Prove the following `rules':
a) M ⊨ ∀x∀yF iff M ⊨ ∀y∀xF
b) M ⊨ ∃x∃yF iff M ⊨ ∃y∃xF
c) M ⊨ ∃x∀yF implies M ⊨ ∀y∃xF
E 1.6.2. Give a counterexample to the following `rule':
  M ⊨ ∀x∃yF implies M ⊨ ∃y∀xF.
E 1.6.3. Which of the following `rules' are correct?
a) M ⊨ ∃xF ∧ ∃xG implies M ⊨ ∃x(F ∧ G)
b) M ⊨ ∃x(F ∧ G) implies M ⊨ ∃xF ∧ ∃xG
c) M ⊨ ∀xF ∧ ∀xG implies M ⊨ ∀x(F ∧ G)
d) M ⊨ ∀x(F ∧ G) implies M ⊨ ∀xF ∧ ∀xG
E 1.6.4. Prove the following statement:
  M_x(c) ⊨ F_x(c) implies M ⊨ F.
1.7 A Calculus for Logical Reasoning
In the previous section we studied some of the basic properties of logical reasoning.
Summing these up we have:
1. M ⊨ F for all valid formulas F
2. M ⊨ F for F ∈ M
3. M ⊨ F, M ⊨ F → G ⟹ M ⊨ G
4. M ⊨ F → G and x ∉ FV(M ∪ {G}) ⟹ M ⊨ ∃xF → G.
This defines a formal calculus for the production of the logical consequences of the
formula set M. The natural questions to ask now are:
1. Is this calculus complete, i.e. does it really produce all the consequences of M?
And, if this is answered positively,
2. can this calculus be effectivised, i.e. is it possible to replace clause 1., which still
refers to the semantical notion of a valid formula, by one or more clauses which
only refer to the syntactical form of the involved formulas?
The notion of a valid formula seems to be a quite complicated one. To check whether
F is valid we have to test S ⊨ F[Φ] for all L-structures S and all S-assignments Φ. So
the above question 2. aims at the construction of an effective calculus which produces
all valid formulas and thus reduces the complicated notion of
• for all S and all Φ, S ⊨ F[Φ]
to the much simpler one of
• there is a deduction of F in the calculus.
In section 1.4 (Corollary 1.4.5) we have seen that every boolean valid formula, i.e. every
formula F with F^B = t for all boolean assignments B, is valid. Since any given formula
has only finitely many propositional atoms there are only finitely many boolean
assignments for the propositional atoms of F. Thus the boolean validity of a formula
can be effectively checked. In section 1.5 (Proposition 1.5.1) we have seen that all
formulas of the shape F_x(t) → ∃xF and dually ∀xF → F_x(t) are valid. Being of the
shape F_x(t) → ∃xF, however, is a purely syntactical property which of course can
be effectively checked. Thus a good attempt seems to be to replace clause 1. by the
clauses:
1. M ⊨ F for every boolean valid formula F and
2. M ⊨ F_x(t) → ∃xF for formulas F and suitable terms t.
Summing this up we obtain a syntactical notion M ⊢ F of logical deduction by the
following definition.
Definition 1.7.1. For a fixed set M of formulas we define the relation M ⊢ F
inductively.
1. M ⊢ F if F is boolean valid (L-axioms)
2. M ⊢ F_x(t) → ∃xF for all formulas F and terms t (∃-axiom)
3. M ⊢ F for F ∈ M (M-axioms)
4. M ⊢ F and M ⊢ F → G ⟹ M ⊢ G (modus ponens)
5. M ⊢ F → G and x ∉ FV(M ∪ {G}) ⟹ M ⊢ ∃xF → G (∃-rule)
For the sake of completeness we also mention the dual axiom
• M ⊢ ∀xF → F_x(t) (∀-axiom)
and the dual rule
• M ⊢ G → F and x ∉ FV(M ∪ {G}) ⟹ M ⊢ G → ∀xF (∀-rule)
though, due to the restriction of the logical symbols to ¬, ∨, ∃, they are not really basic
axioms or rules. As a first consequence we obtain
Theorem 1.7.2 (Soundness theorem). M ⊢ F entails M ⊨ F.
Proof. We prove the theorem by induction on the definition of M ⊢ F. Cases 1. and
2. follow from 1.4.5 and 1.5.1, case 3. follows from 1.6.9, and cases 4. and 5. follow
from the induction hypothesis and 1.6.10 or 1.6.11, respectively.
The opposite direction of 1.7.2, which is the real aim of this section, still needs some
preparation.
50 I. Pure Logic
Proposition 1.7.3. Let (F₁ → … → (Fₙ → G) …) be a boolean valid formula. If
M ⊢ Fᵢ for i = 1, …, n, then M ⊢ G. (We refer to inferences of this kind as `boolean
inferences'.)
Proof. Obvious by n-fold application of modus ponens.
Proposition 1.7.4.
a) M ⊢ G → F and M ⊢ ¬G → F imply M ⊢ F
b) M ⊢ (G → H) → F entails M ⊢ ¬G → F and M ⊢ H → F
c) If M ⊢ (∃xF → F) → G and x ∉ FV(M ∪ {G}), then M ⊢ G.
Proof.
a) The formula (G → F) → ((¬G → F) → F) is boolean valid. Hence M ⊢ F by a
boolean inference according to 1.7.3.
b) Both formulas ((G → H) → F) → (¬G → F) as well as ((G → H) → F) →
(H → F) are boolean valid. Apply 1.7.3.
c) M ⊢ (∃xF → F) → G entails M ⊢ ¬∃xF → G and M ⊢ F → G by b). Thus we
have by an application of the ∃-rule also M ⊢ ∃xF → G and obtain M ⊢ G by
a).
Now we are able to give a positive answer to the first question: the calculus is complete.
For a calculus of a similar type such a result was observed for the first time by K. Gödel
in 1930 and by A. I. Mal'cev in 1936. Our proof is mainly based on the ideas
of L. Henkin from 1949.
Theorem 1.7.5 (Completeness theorem). M ⊨ F entails M ⊢ F.
Proof. M ⊨ F entails the inconsistency of M ∪ {¬F}. But then the set
M ∪ H_L ∪ {¬F}
is inconsistent, too, where H_L was the Henkin set defined in section 1.5. Since M ∪
H_L ∪ {¬F} is a Henkin set we have that it is also s.-inconsistent, and we apply the
compactness theorem for sentential logic to obtain finite sets {F₁, …, Fₙ} ⊆ M and
{H₁, …, Hₘ} ⊆ H_L such that
{F₁, …, Fₙ, H₁, …, Hₘ, ¬F}
is s.-inconsistent. This, however, means that the formula
F₁ → … → Fₙ → H₁ → … → Hₘ → F
is boolean valid. Formulas in H_L are either of the form
• G_x(t) → ∃xG or
• ∃xG → G_x(c_{∃xG}).
Let us assume that we have numbered the Hᵢ's in such a way that we have
• Hᵢ = G^i_x(t_i) → ∃xG^i for i = 1, …, k
• Hⱼ = ∃xG^j → G^j_x(c_{∃xG^j}) for j = k + 1, …, m
and H_{k+1}, …, Hₘ are ordered in such a way that we have
deg_H(G^j) ≥ deg_H(G^{j+1}).
Therefore we can be sure that
c_{∃xG^j} does not occur in H_{j+1}, …, Hₘ.
Now we replace the Henkin constants successively by new variables. The result
F₁ → … → Fₙ → H′₁ → … → H′ₘ → F
is still boolean valid. Because of Fᵢ ∈ M for i = 1, …, n we have M ⊢ Fᵢ and get
M ⊢ H′₁ → … → H′ₘ → F.
Since we have for i = 1, …, k
M ⊢ H′ᵢ
by an ∃-axiom, we have, using a boolean inference,
M ⊢ H′_{k+1} → … → H′ₘ → F.
But this is in fact nothing but
M ⊢ (∃xG^{k+1} → G^{k+1}) → H′_{k+2} → … → H′ₘ → F.
We may assume that
x ∉ FV(M ∪ {H′_{k+2}, …, H′ₘ, F}).
By Proposition 1.7.4 we obtain
M ⊢ H′_{k+2} → … → H′ₘ → F.
Repeating this procedure (m − k) times we finally get
M ⊢ F.
Now we are able to give a nice characterization of inconsistent sets of formulas.
Corollary 1.7.6. Let M be a set of L-formulas. M is inconsistent iff M ⊢ F for all
L-formulas F.
Proof. If we have M ⊢ F for all formulas F, then it is obvious that there is no L-structure
S and no S-assignment σ with
S ⊨ M[σ].
For the other direction assume that M is inconsistent. By Proposition 1.6.4 we have
M ⊨ F for any formula F. The completeness theorem gives the desired result.
We close this section by listing some of the properties of the syntactical calculus.
Proposition 1.7.7.
a) F ≡ G iff ∅ ⊢ F ↔ G (i.e. ∅ ⊢ (F → G) ∧ (G → F))
b) M ⊢ F → G → H iff M ⊢ F ∧ G → H
c) M ⊢ F₁ → G and M ⊢ F₂ → G imply M ⊢ F₁ ∨ F₂ → G
d) M ⊢ F → G₁ and M ⊢ F → G₂ imply M ⊢ F → G₁ ∧ G₂
e) ∅ ⊢ ¬∀xF ↔ ∃x¬F and ∅ ⊢ ¬∃xF ↔ ∀x¬F
f) M ⊢ F → G and M ⊢ G → H imply M ⊢ F → H
g) M ⊢ G → F_x(t) implies M ⊢ G → ∃xF
h) M ⊢ F_x(t) → G implies M ⊢ ∀xF → G
All proofs are obvious and left as exercises.
Exercises
E 1.7.1.
a) Let M be a set of L-formulas with x ∉ FV(M) and M ⊢ F. Prove that M ⊢ F_x(t)
for any L-term t with FV(t) ∩ BV(F) = ∅.
b) Is x ∉ FV(M) a necessary condition? Prove your claim.
E 1.7.2. Give derivations ∅ ⊢ F of the following formulas where P, Q, R are predicate
symbols.
a) ∃x∀yPxy → ∀y∃xPxy
b) ∀x(Qx → Rx) → (∀xQx → ∀xRx).
E 1.7.3.
a) ∅ ⊢ ∃xF ↔ ∃yF_x(y) for y ∉ FV(F)
b) Let F, F̃ be two formulas which are obtained from each other by renaming bound
variables. Prove:
1) ∅ ⊢ F ↔ F̃
2) M ⊢ F iff M ⊢ F̃.
c) M_x(c) ⊢ F_x(c) implies M ⊢ F
E 1.7.4. Prove Proposition 1.7.7.
E 1.7.5. In the proof of the completeness theorem we demanded H_{k+1}, …, Hₘ to be
ordered in a certain way. Is this necessary?
1.8 A Cut Free Calculus for First Order Logic
The calculus just developed is usually called a `Hilbert style' calculus because similar
calculi were already proposed by David Hilbert [∗1862, †1943]. Again we want to
emphasise that in developing this calculus we achieved a considerable reduction of the
complexity of the notions of logical validity and logical consequence. In order to check
whether a formula belongs to the set
Con(M) = {F : M ⊨ F}
of logical consequences of a given set M we have, according to the definition, to consider
all L-structures S and all S-assignments σ. Then we have to check S ⊨ M[σ] and, in
case that this holds, also S ⊨ F[σ]. This seems to be a quite complicated procedure.
For the syntactical calculus, however, this becomes much simpler. We may even
construct a (hypothetical) machine which works according to the rules of the Hilbert
style calculus, and this machine will, step by step, produce all logical consequences of
M (provided that M is given in a form which allows it to be stored in the machine). To
check whether F is a consequence of M we just have to wait until F appears in the
list produced by the calculus. If F really belongs to Con(M) we will eventually get
the answer `Yes'. However, if F ∉ Con(M), we don't get any answer. All we know is
that F did not yet show up. But we can never be sure whether F will show up in the
future or not. Thus we do not really have a decision procedure. Later on we will show
that such a decision procedure cannot exist (cf. Theorem 3.8.2).
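The hypothetical machine described above can be sketched for a toy fragment. The following Python illustration (the tuple coding and the restriction to modus ponens over a fixed finite axiom set are our own simplifications) shows why membership in Con(M) is only semi-decidable: a consequence eventually appears in the enumeration, but for a non-consequence the loop would run forever without ever answering `No':

```python
from collections import deque

# A toy Hilbert-style engine over formulas coded as nested tuples,
# with ("imp", F, G) for F -> G.  Only modus ponens is implemented,
# so this is merely an illustration of the enumeration idea.

def consequences(axioms):
    """Generate derivable formulas one by one: a semi-decision procedure."""
    derived, queue = [], deque(axioms)
    while queue:
        f = queue.popleft()
        if f in derived:
            continue
        derived.append(f)
        yield f
        # close under modus ponens with everything derived so far
        for g in list(derived):
            if g[0] == "imp" and g[1] == f:
                queue.append(g[2])
            if f[0] == "imp" and f[1] == g:
                queue.append(f[2])

A, B, C = ("atom", "A"), ("atom", "B"), ("atom", "C")
axioms = [A, ("imp", A, B), ("imp", B, C)]
theorems = []
for f in consequences(axioms):
    theorems.append(f)
    if f == C:          # C eventually shows up ...
        break           # ... but for a non-theorem we would wait forever
assert C in theorems
```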
In this section, however, we want to introduce a calculus which allows us to search
for a proof of F. In order to develop this calculus we need some observations. The
compactness theorem, which is a deep theorem for the semantical notion of logical
consequence, is obviously trivial for the syntactical notion of logical consequence:
M ⊢ F implies {F₁, …, Fₙ} ⊢ F for some {F₁, …, Fₙ} ⊆ M,
since in a derivation of M ⊢ F we can use only finitely many M-axioms. This proves that the
compactness theorem also follows from the completeness theorem. We did it the other
way round and deduced the completeness theorem from the compactness theorem (in
fact the compactness theorem for sentential logic was sufficient). Thus there is a finite
subset {F₁, …, Fₙ} ⊆ M such that M ⊢ F ⇔ {F₁, …, Fₙ} ⊢ F. So it would suffice
to define {F₁, …, Fₙ} ⊢ F. By the deduction theorem this is equivalent to
{F₁, …, F_k} ⊢ F_{k+1} → … → Fₙ → F
or
{F₁, …, F_k} ⊢ ¬F_{k+1} ∨ … ∨ ¬Fₙ ∨ F.
If we omit the ∨-symbol on the right hand side we obtain figures of the form
(1.2) A₁, …, Aₙ ⊢ S₁, …, Sₘ
whose semantical interpretation is
A₁ ∧ … ∧ Aₙ ⊨ S₁ ∨ … ∨ Sₘ.
According to Gerhard Gentzen [∗1909, †1945] figures of the form displayed in (1.2)
are called sequents. The formulas A₁, …, Aₙ are the antecedents while S₁, …, Sₘ
are the succedents of the sequent.
Gentzen used the notion of sequents to prove his famous `Hauptsatz' (1935) which
roughly says that for every derivable formula F there is already a derivation without
detours, which means that it only uses sub-formulas of F. Of course this cannot be true
in full generality: since there are only finitely many sub-formulas of F, this would give
us a decision procedure. We are going to explore what a derivation without detours
really means. We will, however, not do this in the original Gentzen style, but use a
simplification of the calculus due to William W. Tait (1968). The observation he made
is that a sequent
F₁, …, Fₘ ⊢ G₁, …, Gₙ
is, according to the deduction theorem, equivalent to
⊢ ¬F₁, …, ¬Fₘ, G₁, …, Gₙ.
So there is indeed no need to distinguish between antecedent and succedent formulas.
All we need is to derive finite sets of formulas whose semantical meaning is that of the
disjunction of these formulas.
Negated formulas take over the role of antecedents, non-negated ones that of succedents.
But we can even dispense with the negation symbol and thus completely abolish the
difference between antecedent and succedent formulas. We will follow this line because
then the calculus gets a beautifully symmetrical formulation which makes it easy to
handle. Thus we have to alter the language for the Tait-calculus a little.
Definition 1.8.1. Let L = L(C, F, P) be a first order language. The Tait-language L^T
for the language L is given by the following alphabet:
1. Variables as usual. (This means as in Definition 1.1.1.)
2. The sets C and F as usual.
3. The set P of predicate symbols together with a set P̄ = {P̄ : P ∈ P} of dual
predicate symbols, where #P̄ = #P.
4. The sentential connectives ∧, ∨ and the quantifiers ∀, ∃.
5. Auxiliary symbols as usual.
For example, the Tait-language for group theory contains a binary predicate symbol =̄,
too. The intended meaning of =̄ is that it should be interpreted as ≠.
The grammars for forming terms and formulas are not altered. An L-structure S
is also a structure for the Tait-language for L. The interpretation of the new predicate
symbols is given by
P̄^S = {(s₁, …, sₙ) : (s₁, …, sₙ) ∉ P^S}.
Thus P̄ is a symbol for the complement of P. Although negation does not belong to
the alphabet of the Tait-language for L we may define it by the following clauses:
Definition 1.8.2. For a formula F of a Tait-language, ¬F is defined by induction on
the definition of the formulas.
1. ¬(Pt₁…tₙ) = (P̄t₁…tₙ), ¬(P̄t₁…tₙ) = (Pt₁…tₙ)
2. ¬(F ∧ G) = (¬F ∨ ¬G), ¬(F ∨ G) = (¬F ∧ ¬G)
3. ¬(∀xF) = (∃x¬F), ¬(∃xF) = (∀x¬F)
As an easy consequence of the definition we get the syntactical property
¬¬F = F.
The intended meaning of ¬F was that of the negation of F. To see that we really met this
intention we prove the following proposition.
Proposition 1.8.3. Let S be an L-structure, σ an S-assignment and F an L-formula.
Then it is
Val_S(¬F, σ) = ¬Val_S(F, σ).
Proof by induction on the definition of F.
1. If F = Pt₁…tₙ, then ¬F = P̄t₁…tₙ, and if F = P̄t₁…tₙ, then ¬F =
Pt₁…tₙ. By definition it is
Val_S(P̄t₁…tₙ, σ) = ¬Val_S(Pt₁…tₙ, σ).
2. All other cases follow from the induction hypothesis using Proposition 1.3.13.
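The defined negation of Definition 1.8.2 is a straightforward structural recursion. A minimal sketch, with our own coding of Tait-formulas as nested tuples and literals carrying an explicit polarity:

```python
# Formulas of a Tait-language as nested tuples.  Atomic formulas carry a
# polarity: ("lit", P, True) stands for P t1...tn, and ("lit", P, False)
# for the dual predicate symbol (the complement of P).

def neg(f):
    """The defined negation of Definition 1.8.2 (no negation symbol needed)."""
    if f[0] == "lit":
        return ("lit", f[1], not f[2])
    if f[0] == "and":
        return ("or", neg(f[1]), neg(f[2]))
    if f[0] == "or":
        return ("and", neg(f[1]), neg(f[2]))
    if f[0] == "all":            # ("all", x, F) codes  forall x F
        return ("ex", f[1], neg(f[2]))
    if f[0] == "ex":
        return ("all", f[1], neg(f[2]))
    raise ValueError(f[0])

F = ("all", "x", ("or", ("lit", "P", True),
                  ("and", ("lit", "Q", True), ("lit", "R", False))))
# The syntactical property of the text: double negation is the identity.
assert neg(neg(F)) == F
```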
In the following we are going to denote finite sets of formulas by capital Greek letters
Γ, Δ, Λ, …
Definition 1.8.4. We define the Tait-calculus for a language L. The relation ⊢_T Δ
is defined inductively as follows. Let Δ be a set of L^T-formulas.
1. If {P, P̄} ⊆ Δ for an atomic L^T-formula P, then ⊢_T Δ (L-axiom)
2. ⊢_T Δ, F and ⊢_T Δ, G imply ⊢_T Δ, F ∧ G (∧-rule)
3. ⊢_T Δ, Fᵢ for i = 1 or i = 2 implies ⊢_T Δ, F₁ ∨ F₂ (∨-rule)
4. ⊢_T Δ, F_x(y) and y ∉ FV(Δ) = ⋃{FV(F) : F ∈ Δ} implies
⊢_T Δ, ∀xF (∀-rule)
5. ⊢_T Δ, F_x(t) for some term t implies ⊢_T Δ, ∃xF (∃-rule)
The formulas shown explicitly in the conclusion are called the main formulas of the
inferences. Remark that, due to the fact that we derive sets of formulas, two occur-
rences of a particular formula are automatically contracted into one. In particular it is
possible for the main formula to occur in the premise of an inference. The typical
case of this phenomenon, which will become particularly important in the proof of
the completeness theorem 1.8.15 and in Herbrand's theorem 1.9.8, is the following: if
we have derived ⊢_T ∃xF, F_x(t) we can conclude
⊢_T ∃xF.
Now we give some technical results about the Tait-calculus which will be proven in the
exercises.
Proposition 1.8.5.
a) ⊢_T Δ, F and F̃ is obtained from F by renaming bound variables implies
⊢_T Δ, F̃.
b) ⊢_T Δ implies ⊢_T Δ_x(t).
c) (Structural rule) ⊢_T Δ implies ⊢_T Δ, Γ.
d) (∨-inversion) ⊢_T Δ, F₀ ∨ F₁ implies ⊢_T Δ, F₀, F₁.
e) (∧-inversion) ⊢_T Δ, F₀ ∧ F₁ implies ⊢_T Δ, F₀ and ⊢_T Δ, F₁.
In a first step we show the soundness of the Tait-calculus. We define
¬Δ := {¬F : F ∈ Δ} and S ⊨ Δ[σ] :iff S ⊨ ⋁Δ[σ],
where ⋁Δ denotes the disjunction of the formulas in Δ, and prove the following lemma.
Lemma 1.8.6. If ⊢_T Δ, then ¬Δ is inconsistent.
Proof. We prove this lemma by induction on the definition of ⊢_T Δ. In case of an
L-axiom we have {P, P̄} ⊆ Δ. Thus {P, P̄} ⊆ ¬Δ, which entails the inconsistency
of ¬Δ. To consider the remaining cases we first have to make some remarks.
Let us assume that the last inference is of the form
(1.3) ⊢_T Δᵢ, Fᵢ for i ∈ I ⇒ ⊢_T Δ′, F
where F is the main formula of that inference and Δ = Δ′ ∪ {F}. We want to restrict
ourselves to the case that in (1.3) we have Δᵢ = Δ′ = Δ, which is possible
by the above mentioned contractions. All the other cases are handled in a similar way
or can be reduced to that case by an application of the structural rule.
In the case of the ∧-rule
⊢_T Δ, F and ⊢_T Δ, G ⇒ ⊢_T Δ, (F ∧ G)
we get the inconsistency of ¬Δ, ¬F and of ¬Δ, ¬G by the induction hypothesis. Let S be
an L-structure and σ an S-assignment such that S ⊨ ¬Δ[σ]. Then S ⊨ F[σ] and
S ⊨ G[σ]. Hence S ⊨ (F ∧ G)[σ], which entails
S ⊭ ¬Δ, ¬(F ∧ G)[σ].
So ¬Δ, ¬(F ∧ G) is inconsistent. In case of the ∨-rule for i ∈ {1, 2}
⊢_T Δ, Fᵢ ⇒ ⊢_T Δ, F₁ ∨ F₂
we have by the induction hypothesis the inconsistency of ¬Δ, ¬Fᵢ. Thus if S ⊨ ¬Δ[σ] we
get S ⊨ Fᵢ[σ] and therefore also S ⊨ (F₁ ∨ F₂)[σ]. Hence ¬Δ, ¬(F₁ ∨ F₂) is inconsistent.
If we have
⊢_T Δ, F_x(y) ⇒ ⊢_T Δ, ∀xF
according to the ∀-rule, then ¬Δ, ¬F_x(y) is inconsistent by the induction hypothesis.
Assume S ⊨ ¬Δ[σ]. Then we have S ⊨ F_x(y)[σ]. If σ′ is any S-assignment which
coincides with σ except possibly at y, we have, since y ∉ FV(Δ),
S ⊨ ¬Δ[σ′]
and, because ¬Δ, ¬F_x(y) is inconsistent, it follows that
S ⊨ F_x(y)[σ′].
So up to now we have proven
S ⊨ ∀yF_x(y)[σ].
Using renaming of bound variables (cf. Exercise E 1.3.7) we obtain S ⊨ ∀xF[σ], which
proves the inconsistency of ¬Δ, ¬∀xF. Finally assume an instance
⊢_T Δ, F_x(t) ⇒ ⊢_T Δ, ∃xF
of the ∃-rule. Again we have the inconsistency of ¬Δ, ¬F_x(t) by the induction hypothesis.
If S ⊨ ¬Δ[σ] we get S ⊨ F_x(t)[σ]. Put
σ′(z) = σ(z) for z ≠ x and σ′(x) = t^S[σ].
Then σ′ coincides with σ except at x and σ′(x) = t^S[σ], which entails S ⊨ F[σ′]. Thus
S ⊨ ∃xF[σ] and ¬Δ, ¬∃xF is inconsistent.
Theorem 1.8.7 (Soundness of the Tait-calculus). ⊢_T F implies ⊨ F.
Corollary 1.8.8.
a) ⊢_T Δ, F entails ¬Δ ⊨ F
b) ⊢_T Δ, F entails ¬Δ ⊢ F
Proof. We have ¬Δ ⊨ F iff ¬Δ ∪ {¬F} is inconsistent by 1.6.3. Thus we get a) by
1.8.6. b) follows from a) by the completeness theorem.
Our next aim is to show that ⊨ F also entails ⊢_T F, i.e. to prove the completeness of
the Tait-calculus. Since we already have a completeness theorem, the simplest idea to
do this is to prove ⊢ F ⇒ ⊢_T F by induction on the derivation of F in the Hilbert-
style calculus. It is not too hard to see that ⊢_T F₁, …, Fₙ holds for all boolean valid
formulas F₁ ∨ … ∨ Fₙ. It is also easy to prove ⊢_T F_x(t) → ∃yF_x(y) and to show that
⊢_T F ∨ G entails ⊢_T ∃xF ∨ G for x ∉ FV(G).
However, we cannot yet show that ⊢_T F and ⊢_T ¬F ∨ G entail ⊢_T G. So in order
to carry out this proof we would need a rule
(cut) ⊢_T Δ, F and ⊢_T Δ, ¬F ⇒ ⊢_T Δ,
known as the cut-rule. The addition of the cut-rule will of course not spoil the correct-
ness of the calculus. Thus one way to get the completeness of ⊢_T would be first to
add the cut-rule and afterwards to try to get rid of it again. This is indeed a tractable
way and we are going to follow it in the exercises. Here we choose the different way
of proving the completeness of the Tait-calculus directly. For this we first need some
notation. The idea of the proof is to define a search tree which searches for a proof
of a given formula F. Such an idea was proposed by Kurt Schütte [∗1909] for
the first time.
However, since the Tait-calculus derives sets of formulas, we are forced to define
search trees for sets Δ. The search has of course to follow a certain strategy which
investigates the formulas in the set Δ. Therefore we need to order the formulas in Δ
such that we can handle them according to their order. We will therefore, for the
rest of this section, interpret Δ as a finite list. This means that there is a difference
between Δ = (A, B, C) and Δ′ = (B, A, C).
Definition 1.8.9. Let Δ = (F₁, …, Fₙ) be a finite list of L^T-formulas.
a) A formula F ∈ Δ is irreducible in Δ if it is atomic. Otherwise we call F a redex
of Δ.
b) A redex in Δ is distinguished if it is leftmost, i.e. a redex Fⱼ ∈ Δ is distinguished
if all Fᵢ with i < j are irreducible.
c) The reductum Δ^r of Δ is obtained by cancelling the distinguished redex in Δ.
For Δ = (x = y, ∀x(x · 1 = x), ∃y(y ≠ 1)), where we wrote ≠ instead of =̄,
we have
• x = y irreducible,
• ∀x(x · 1 = x) the distinguished redex
and
• Δ^r = (x = y, ∃y(y ≠ 1)),
where ∃y(y ≠ 1) is the distinguished redex in Δ^r.
To define the search tree for a finite list Δ of formulas we first have to define the notion
of a tree. Here we are going to identify n ∈ ℕ with the ordinal
n = {0, …, n − 1}
as we do in the appendix. Instead of n ∈ ℕ we will often write
n < ω
because we identify ℕ with the least limit ordinal ω.
Definition 1.8.10.
a) A number sequence is a map σ : n → ω where {0, …, n − 1} = n < ω. We call
dom(σ) = n also the length of σ. By ω^{<ω} we denote the set of number sequences.
σ is an initial segment of τ, written as σ ⊆ τ, if σ(k) = τ(k) for all k ∈ dom(σ).
Number sequences are denoted by σ, τ, σ′, …
b) A tree is a nonempty subset T ⊆ ω^{<ω} which is closed under initial segments,
i.e. if σ ⊆ τ ∈ T, then σ ∈ T. A σ ∈ T is called a node in T. A node σ ∈ T
is topmost if σ⌢⟨n⟩ ∉ T for all n < ω. (Here σ⌢⟨n⟩ denotes the map with domain
dom(σ) + 1 such that σ⌢⟨n⟩(dom(σ)) = n, i.e. if σ = ⟨s₀, …, s_k⟩, then σ⌢⟨n⟩ =
⟨s₀, …, s_k, n⟩.) The empty map ⟨⟩ is a member of any tree. We call ⟨⟩ the root
of T.
c) Let f be a map into ω with dom(f) ≤ ω. f↾n is defined by f↾n(k) = f(k)
for k < n. f is a path in T if f↾n ∈ T holds for all n ∈ dom(f). T is well-founded
if every path in T is finite, i.e.
∀f ∈ ω^ω ∃n (f↾n ∉ T).
f is a path through T if the path f is either topmost in T or dom(f) = ω.

As an example consider the tree
T = {⟨⟩, ⟨0⟩, ⟨1⟩, ⟨2⟩, ⟨0, 0⟩, ⟨1, 0⟩, ⟨2, 5⟩, ⟨0, 0, 0⟩, ⟨0, 0, 1⟩, ⟨2, 5, 0⟩, ⟨2, 5, 3⟩}
with root ⟨⟩. Here we have the topmost nodes
⟨0, 0, 0⟩, ⟨0, 0, 1⟩, ⟨1, 0⟩, ⟨2, 5, 0⟩, ⟨2, 5, 3⟩.
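The notions of Definition 1.8.10 can be checked mechanically on this finite example. A small Python sketch (coding number sequences as tuples, an assumption of ours) recomputes the topmost nodes:

```python
# A finite tree as a set of tuples (number sequences), closed under
# initial segments; this reproduces the example above.

T = {(), (0,), (1,), (2,), (0, 0), (1, 0), (2, 5),
     (0, 0, 0), (0, 0, 1), (2, 5, 0), (2, 5, 3)}

# closure under initial segments: every initial segment of a node is a node
assert all(s[:k] in T for s in T for k in range(len(s)))

def topmost(tree):
    """Nodes sigma with no successor sigma + <n> in the tree."""
    return {s for s in tree
            if not any(t[:-1] == s for t in tree if len(t) == len(s) + 1)}

assert topmost(T) == {(0, 0, 0), (0, 0, 1), (1, 0), (2, 5, 0), (2, 5, 3)}
```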
Lemma 1.8.11 (Induction on a well-founded tree). Let T be a well-founded tree.
Assume
(1.4) ∀σ (∀n (σ⌢⟨n⟩ ∈ T → φ(σ⌢⟨n⟩)) → φ(σ)).
Then ∀σ ∈ T φ(σ), where φ(σ) is any `property' of the node σ.
Proof. Assume that there is a σ ∈ T such that ¬φ(σ). We define an f ∈ ω^ω which
is an infinite path in T such that ¬φ(f↾n) holds for all n ≥ dom(σ). Put m = dom(σ)
and f(k) = σ(k) for k < m. For n ≥ m we define f by recursion on n. By the induction
hypothesis we have ¬φ(f↾n). Thus by (1.4) the set
M = {k < ω : f↾n⌢⟨k⟩ ∈ T ∧ ¬φ(f↾n⌢⟨k⟩)}
is not empty. Define f(n) = min M. Thus f↾n ∈ T for all n < ω, which contradicts the
well-foundedness of T.
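Induction on a well-founded tree also licenses definitions by recursion on the nodes. As a small illustration (our own example, restricted to finite trees whose labels stay below a fixed bound) we compute the height of a node: 0 at topmost nodes, otherwise one more than the maximal height of a successor:

```python
# Recursion on a well-founded tree: the value at a node depends only on
# the values at its successors, exactly as in the induction scheme (1.4).

def height(tree, node=()):
    succs = [node + (n,) for n in range(100)   # bounded scan: labels < 100
             if node + (n,) in tree]
    if not succs:                              # topmost node
        return 0
    return 1 + max(height(tree, s) for s in succs)

T = {(), (0,), (1,), (0, 0), (0, 0, 0)}
assert height(T, (0, 0, 0)) == 0
assert height(T) == 3
```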
For the definition of the search tree we need the hypothesis that L is a countable
language. Assume that t₀, t₁, … is an enumeration of all L-terms.
Definition 1.8.12 (The search tree S_Δ). Let L be a countable language and Δ be
a finite list of L^T-formulas. We define the search tree S_Δ together with a labeling map
δ which assigns finite lists of formulas to the nodes of S_Δ.
1. ⟨⟩ ∈ S_Δ and δ(⟨⟩) = Δ.
2. If σ ∈ S_Δ and δ(σ) is either irreducible or an L-axiom (when viewed as a set),
then σ is topmost in S_Δ.
Thus, for the following definitions, assume that σ ∈ S_Δ and δ(σ) is neither an axiom
nor irreducible. Let R be the distinguished redex in δ(σ). We have the following
cases:
3. R = (F₀ ∧ F₁): Then σ⌢⟨i⟩ ∈ S_Δ for i ∈ {0, 1} and
δ(σ⌢⟨i⟩) = δ(σ)^r, Fᵢ.
4. R = (F₀ ∨ F₁): Then σ⌢⟨0⟩ ∈ S_Δ. Define
δ(σ⌢⟨0⟩) = δ(σ)^r, F₀, F₁.
5. R = ∀xF: Then σ⌢⟨0⟩ ∈ S_Δ and
δ(σ⌢⟨0⟩) = δ(σ)^r, F_x(y)
where y is the first variable (in the fixed enumeration of the terms) which does
not occur in ⋃{δ(τ) : τ ⊆ σ} (i.e. y ∉ FV(⋃{δ(τ) : τ ⊆ σ})).
6. R = ∃xF: Then σ⌢⟨0⟩ ∈ S_Δ and
δ(σ⌢⟨0⟩) = δ(σ)^r, F_x(t), ∃xF
where t is the first term (in the fixed enumeration) such that F_x(t) does not
occur in ⋃{δ(τ) : τ ⊆ σ}.
A search path for Δ is a path through S_Δ. We say that a search path f contains a set Γ
of formulas if Γ = δ(f↾n) for some n ≤ dom(f).
The search tree S_Δ for
Δ = (x ≠ y ∧ x = 1, x ≠ 1 ∨ x = 1)
(again writing ≠ for =̄) is given by
δ(⟨⟩) = (x ≠ y ∧ x = 1, x ≠ 1 ∨ x = 1),
δ(⟨0⟩) = (x ≠ 1 ∨ x = 1, x ≠ y), δ(⟨1⟩) = (x ≠ 1 ∨ x = 1, x = 1),
δ(⟨0, 0⟩) = (x ≠ y, x ≠ 1, x = 1), δ(⟨1, 0⟩) = (x = 1, x ≠ 1, x = 1).
The nodes ⟨0, 0⟩ and ⟨1, 0⟩ are topmost since their labels are L-axioms.
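For lists without quantifiers the construction of S_Δ terminates, and the condition `every search path contains an L-axiom' can be tested directly. The following Python sketch (our own coding; the quantifier cases, which need the term enumeration and may produce infinite search paths, are omitted) reproduces the example above:

```python
# A minimal proof search for the quantifier-free part of the Tait-calculus,
# following Definition 1.8.12.  Lists of formulas are Python tuples; atomic
# formulas are ("lit", name, pol) with pol=False for the dual symbol.

def is_axiom(delta):
    lits = {f for f in delta if f[0] == "lit"}
    return any(("lit", n, not p) in lits for (_, n, p) in lits)

def provable(delta):
    """Does every search path for delta end in an L-axiom?"""
    if is_axiom(delta):
        return True
    redexes = [i for i, f in enumerate(delta) if f[0] != "lit"]
    if not redexes:                     # irreducible, no axiom: open path
        return False
    i = redexes[0]                      # the distinguished (leftmost) redex
    rest = delta[:i] + delta[i + 1:]    # the reductum
    f = delta[i]
    if f[0] == "and":                   # both successor nodes must succeed
        return provable(rest + (f[1],)) and provable(rest + (f[2],))
    if f[0] == "or":
        return provable(rest + (f[1], f[2]))

def lit(n, pol=True):
    return ("lit", n, pol)

# the example above: (x=y-dual and x=1,  x=1-dual or x=1)
delta = (("and", lit("x=y", False), lit("x=1")),
         ("or", lit("x=1", False), lit("x=1")))
assert provable(delta)
assert not provable((lit("x=y"),))
```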
Lemma 1.8.13 (Principal syntactic lemma). Let L be countable. Assume that
every search path for Δ contains an L-axiom. Then ⊢_T Δ.
Proof. Since every search path for Δ contains an axiom, every search path is finite. Hence
S_Δ is well-founded. We show by bar induction that ⊢_T δ(σ) holds for all σ ∈ S_Δ. Our first
observation is:
(A) If P is atomic and P ∈ δ(σ) for some σ ∈ S_Δ, then P ∈ δ(τ) for all τ ∈ S_Δ
such that σ ⊆ τ.
This is immediate by definition since P can never be cancelled.
To prove the lemma we have the induction hypothesis
σ⌢⟨n⟩ ∈ S_Δ ⇒ ⊢_T δ(σ⌢⟨n⟩).
If σ⌢⟨n⟩ ∉ S_Δ for all n and σ ∈ S_Δ, then σ is topmost in S_Δ. Thus σ is a search path for
Δ which has to contain an axiom. But then already δ(σ) has to be an axiom, which
entails ⊢_T δ(σ). If σ⌢⟨n⟩ ∈ S_Δ for some n, then δ(σ) is reducible. Let R be the distinguished
redex in δ(σ). We have to take cases on the shape of R.
1. If R = (F₀ ∧ F₁), then σ⌢⟨i⟩ ∈ S_Δ for i ∈ {0, 1}. By the induction hypothesis
⊢_T δ(σ)^r, Fᵢ for i ∈ {0, 1}. Hence ⊢_T δ(σ)^r, F₀ ∧ F₁ by
an ∧-inference. This, however, is ⊢_T δ(σ).
2. If R = (F₀ ∨ F₁), then σ⌢⟨0⟩ ∈ S_Δ. By the induction hypothesis we therefore
have ⊢_T δ(σ⌢⟨0⟩). We have δ(σ⌢⟨0⟩) = δ(σ)^r, F₀, F₁ and obtain ⊢_T δ(σ)^r, R
by two ∨-inferences.
3. R = ∀xF: Then ⊢_T δ(σ)^r, F_x(y) where y does not occur in δ(σ)^r. Hence
⊢_T δ(σ) by a ∀-inference.
4. R = ∃xF: Then ⊢_T δ(σ)^r, F_x(t), ∃xF by the induction hypothesis and we ob-
tain ⊢_T δ(σ) by an ∃-inference.
Lemma 1.8.14 (Principal semantic lemma). Let L be countable and Δ be a fi-
nite list of formulas such that there is a search path f for Δ which does not contain
an axiom. Then there is an L-structure S and an S-assignment σ with S ⊭ F[σ] for
all
F ∈ ⋃{δ(f↾n) : n ≤ dom(f)}.
Proof. To prove the lemma we need a couple of observations.
(B) If n ≤ dom(f) and R is a redex in δ(f↾n), then there is an m ≤ dom(f) such
that R is distinguished in δ(f↾m).
Proof. We induct on the number of redexes which precede R in the list δ(f↾n). If
this number is 0, then R is distinguished in δ(f↾n). Otherwise let Fᵢ be the
distinguished redex in δ(f↾n). Since f contains no axiom we have n ∈ dom(f), and
Fᵢ is no longer distinguished in δ(f↾(n+1)), which is δ(f↾n)^r followed by the
new formulas of the applied rule. Thus the number of redexes which precede R
has decreased, and we get
the existence of some m ≤ dom(f) such that R is distinguished in δ(f↾m) by the
induction hypothesis.
(C) If m ≤ dom(f) and (F₀ ∧ F₁) ∈ δ(f↾m), then there is an i ∈ {0, 1} and an
n ≤ dom(f) such that Fᵢ ∈ δ(f↾n).
Proof. By (B) there is an n ≤ dom(f) such that (F₀ ∧ F₁) is distinguished in
δ(f↾n). But then n ∈ dom(f) and by definition we have Fᵢ ∈ δ(f↾(n+1)) for
i = f(n).
(D) If m ≤ dom(f) and (F₀ ∨ F₁) ∈ δ(f↾m), then there is an n ≤ dom(f) such that
Fᵢ ∈ δ(f↾n) for i ∈ {0, 1}.
Proof. Let n ≤ dom(f) be such that F₀ ∨ F₁ is distinguished in δ(f↾n). Then
n ∈ dom(f) and F₀, F₁ ∈ δ(f↾(n+1)).
(E) If m ≤ dom(f) and (∀xF) ∈ δ(f↾m), then there is a variable y ∉ FV(∀xF) and
an n ≤ dom(f) such that F_x(y) ∈ δ(f↾n).
Proof. Let n₀ ≤ dom(f) be such that ∀xF is distinguished in δ(f↾n₀). Then
n₀ ∈ dom(f) and F_x(y) ∈ δ(f↾(n₀+1)), where y does not occur in ⋃{δ(f↾i) : i ≤ n₀}.
Thus especially y ∉ FV(∀xF) since ∀xF ∈ ⋃{δ(f↾i) : i ≤ n₀}.
(F) If m ≤ dom(f) and (∃xF) ∈ δ(f↾m), then for every L-term t there is an
m_t ≤ dom(f) such that F_x(t) ∈ δ(f↾m_t).
Proof. By (B) there is an n ≤ dom(f) such that ∃xF is distinguished in δ(f↾n). Then
n ∈ dom(f) and for some k we have F_x(t_k) ∈ δ(f↾(n+1)) as well as F_x(t_j) ∈
⋃{δ(f↾i) : i ≤ n} for all j < k. We show by induction on l ≥ k that there is an
m_l ≤ dom(f) such that F_x(t_j) ∈ ⋃{δ(f↾i) : i ≤ m_l} for all j < l and
{F_x(t_l), ∃xF} ⊆ δ(f↾m_l).
For l = k this is already clear. Thus assume that the claim is true for l. By (B)
we obtain an n′ ≤ dom(f) with n′ ≥ m_l such that ∃xF is distinguished in δ(f↾n′). Hence
n′ ∈ dom(f) and {F_x(t_{l+1}), ∃xF} ⊆ δ(f↾(n′+1)) by definition, since t_{l+1} is the
first term in the fixed enumeration such that F_x(t_{l+1}) does not occur in
⋃{δ(f↾i) : i ≤ n′}.
We use properties (A)-(F) to prove the lemma. First we define a structure
S = (S, C^S, F^S, P^S) for the Tait-language of L by
• S = {t : t is an L-term}
• c^S = c for c ∈ C
• f^S(t₁, …, tₙ) = (ft₁…tₙ) for f ∈ F and
• P^S = {(t₁, …, tₙ) : ∃m ≤ dom(f) P̄t₁…tₙ ∈ δ(f↾m)} for P ∈ P, so that P̄^S is
the complement of P^S, as required in a structure for the Tait-language.
An S-assignment σ is defined by σ(x) = x. This yields
(G) t^S[σ] = t for all L-terms t
and we show
(H) If m ≤ dom(f) and F ∈ δ(f↾m), then S ⊭ F[σ].
Proof by induction on the length of F.
1. If F = Pt₁…tₙ, m ≤ dom(f) and F ∈ δ(f↾m), then P̄t₁…tₙ ∉ δ(f↾k) for
all k ≤ dom(f) by (A) and the fact that f does not contain an axiom. Hence
(t₁, …, tₙ) ∉ P^S, which by (G) entails S ⊭ F[σ].
2. If F = P̄t₁…tₙ, then we obtain (t₁, …, tₙ) ∈ P^S by definition. Hence S ⊭ F[σ]
by the definition of the Tait-language for L.
3. If F = F₀ ∧ F₁, then by (C) there are an i ∈ {0, 1} and an m ≤ dom(f) such that
Fᵢ ∈ δ(f↾m). Hence S ⊭ Fᵢ[σ] by the induction hypothesis, which entails S ⊭ F[σ].
4. If F = F₀ ∨ F₁, then by (D) there is an m ≤ dom(f) such that Fᵢ ∈ δ(f↾m) for
i = 0, 1. Hence S ⊭ Fᵢ[σ] for i ∈ {0, 1}, which entails S ⊭ F[σ].
5. If F = ∀xG, then by (E) there is an m₀ ≤ dom(f) such that G_x(y) ∈ δ(f↾m₀).
Hence S ⊭ G_x(y)[σ]. Define
σ′(z) = σ(z) for z ≠ x and σ′(x) = σ(y).
Then S ⊭ G[σ′], which entails S ⊭ ∀xG[σ].
6. If F = ∃xG, then by (F) we have G_x(t) ∈ δ(f↾m_t) for some m_t ≤ dom(f), for
every term t. Assume σ′ coincides with σ except possibly at x. Then let t = σ′(x);
we obtain S ⊭ G_x(t)[σ] by the induction
hypothesis. Hence S ⊭ G[σ′], and since σ′ was arbitrary we have S ⊭ ∃xG[σ].
The combination of the principal syntactic and semantic lemmas yields the com-
pleteness theorem.
Theorem 1.8.15 (Completeness of the Tait-calculus). ⊨ F₁ ∨ … ∨ Fₙ implies
⊢_T F₁, …, Fₙ.
Proof. First we remark that we need not bother about the restriction to a countable L
in the proof of the principal syntactic and semantic lemmas, as we can
restrict the language L to those non-logical symbols which occur in F₁, …, Fₙ. Assume
that ⊢_T F₁, …, Fₙ is false. Then, by the principal syntactic lemma, there is a search
path for (F₁, …, Fₙ) which does not contain an axiom. This, by the principal semantic
lemma, entails that there is a structure S and an S-assignment σ with S ⊭ Fᵢ[σ] for
i = 1, …, n. Hence ⊭ F₁ ∨ … ∨ Fₙ.
Assume that we have any calculus K. We call an inference rule
F₁, …, Fₙ ⇒ G
admissible for K if ⊢_K F₁, …, ⊢_K Fₙ imply ⊢_K G.
Theorem 1.8.16 (Weak form of Gentzen's Hauptsatz). The cut rule
F → G, G → H ⇒ F → H
is admissible for the Tait-calculus (where F → G abbreviates ¬F ∨ G).
Proof. Assume ⊢_T ¬F ∨ G and ⊢_T ¬G ∨ H. Then we have ⊨ ¬F ∨ G and ⊨
¬G ∨ H by the soundness theorem 1.8.7. Hence ⊨ ¬F ∨ H, which by the completeness
theorem 1.8.15 entails ⊢_T ¬F, H. By two ∨-inferences this entails ⊢_T ¬F ∨ H.
We call 1.8.16 the weak form of Gentzen's Hauptsatz since we did not show that there
is a terminating procedure for the elimination of (cut). In the exercises we indicate
how such a procedure is obtainable. Dealing with cut elimination procedures is the
hard core of proof theoretical research.
Exercises
Let L^T be a Tait-language. We define the length l(F) of a formula F inductively:
1. If F is atomic, then l(F) = 0.
2. If F = F₀ ∘ F₁ with ∘ ∈ {∧, ∨}, then l(F) = max{l(F₀), l(F₁)} + 1.
3. If F = QxG with Q ∈ {∀, ∃}, then l(F) = l(G) + 1.
E 1.8.1. Prove l(F) = l(¬F).
Now we are going to define a refinement ⊢_k^n Δ of the Tait-calculus for n, k ∈ ℕ induc-
tively.
(Ax) If P is atomic and {P, P̄} ⊆ Δ, then ⊢_k^n Δ for all n, k ∈ ℕ.
(∧) If ⊢_k^{n₀} Δ, F₀ and ⊢_k^{n₁} Δ, F₁, then ⊢_k^n Δ, F₀ ∧ F₁ for n > max(n₀, n₁).
(∨) If ⊢_k^{n₀} Δ, Fᵢ for i = 0 or i = 1, then ⊢_k^n Δ, F₀ ∨ F₁ for n > n₀.
(∀) If ⊢_k^{n₀} Δ, F_x(y) and y ∉ FV(Δ ∪ {∀xF}), then ⊢_k^n Δ, ∀xF for n > n₀.
(∃) If ⊢_k^{n₀} Δ, F_x(t) for some term t, then ⊢_k^n Δ, ∃xF for n > n₀.
(cut) If ⊢_k^{n₀} Δ, F and ⊢_k^{n₁} Δ, ¬F with l(F) < k, then ⊢_k^n Δ
for n > max(n₀, n₁).
E 1.8.2.
a) ⊢_k^n Δ ⇒ ⊢_k^n Δ_x(t)
b) ⊢_k^n Δ ⇒ ⊢_k^n Δ, Γ
c) ⊢_k^n Δ, F₀ ∨ F₁ ⇒ ⊢_k^n Δ, F₀, F₁
d) ⊢_k^n Δ, F₀ ∧ F₁ ⇒ ⊢_k^n Δ, F₀ and ⊢_k^n Δ, F₁
E 1.8.3.
a) ⊢_0^{2l(F)} F, ¬F
b) ⊢_0^{2l(F)+3} ¬∀xF ∨ F_x(t) and ⊢_0^{2l(F)+3} ¬F_x(t) ∨ ∃xF
c) ⊢_k^{n₀} Δ, ¬F ∨ G and ⊢_k^{n₁} Δ, F and l(F) < k ⇒ ⊢_k^n Δ, G for
n > max(n₀, n₁).
d) Let x ∉ FV(Δ ∪ {G}).
1) ⊢_k^n Δ, G ∨ F ⇒ ⊢_k^{n+3} Δ, G ∨ ∀xF
2) ⊢_k^n Δ, ¬F ∨ G ⇒ ⊢_k^{n+3} Δ, ¬∃xF ∨ G
E 1.8.4.
a) ⊢_k^{n₀} Δ, F and ⊢_k^{n₁} Δ, ¬F and l(F) = k ⇒ ⊢_k^{n₀+n₁} Δ.
Hint: Use induction on n₀ + n₁.
b) ⊢_{k+1}^n Δ ⇒ ⊢_k^{2ⁿ} Δ.
c) (Gentzen's Hauptsatz) Define 2₀(n) = n and 2_{k+1}(n) = 2^{2_k(n)}.
Prove: ⊢_k^n Δ ⇒ ⊢_0^{2_k(n)} Δ.
E 1.8.5. Let F be a boolean valid formula. Prove that there is an n ∈ ℕ such that
⊢_0^n F.
E 1.8.6. Let F be an L^T-formula with ∅ ⊢ F. Prove that there are n, k ∈ ℕ such that
⊢_k^n F.
E 1.8.7. Use the preceding exercises to obtain a second proof of the completeness
theorem for the Tait-calculus:
⊨ F entails ⊢_T F.
E 1.8.8. Determine the search tree for
Δ = (t₁ ≠ t₁, ∃y(y = t₁ ∧ (Pt₅ ∧ P̄t₅))),
where t₀, t₁, … is the fixed enumeration of the terms.
1.9 Applications of Gentzen's Hauptsatz
As a first application of Gentzen's Hauptsatz we want to show a theorem of Jacques
Herbrand [∗1908, †1931]. To prepare this we need the notion of an existential formula,
∃-formula for short.
Definition 1.9.1. The class of ∃-formulas is inductively defined by the following
clauses.
1. Every atomic formula is an ∃-formula.
2. If F and G are ∃-formulas, so are (F ∧ G) and (F ∨ G).
3. If F is an ∃-formula, then so is ∃xF.
Now let us introduce the problem behind Herbrand's theorem: think of a sentence ∃xF
with F quantifier free and
⊢_T ∃xF.
Then we know by the soundness of the Tait-calculus that we have
S ⊨ ∃xF
for any L-structure S. This means that there is some s ∈ S with
S ⊨ F_x(s),
i.e. in the structure S we have a witness s ∈ S for the formula ∃xF. But in general
there is not one single term t such that the interpretation of t is a witness in every
structure S. This means we do not have a term t such that for all L-structures S we
have
S ⊨ F_x(t).
The following lemma gives the way out in this special case: it is possible to find a finite
set of terms so that in each case the witness can be taken to be the interpretation of
one of those terms. In Exercise E 1.9.4 we ask if it is possible to fix an upper bound
for the number of terms witnessing the existential formula.
Lemma 1.9.2. Let Δ, F be finitely many ∃-formulas. If ⊢_T Δ, ∃xF, then there are
finitely many terms t₁, …, tₙ such that
⊢_T Δ, F_x(t₁), …, F_x(tₙ).
Proof. We show the lemma by induction on the length of the derivation ⊢_T Δ, ∃xF.
If ⊢_T Δ, ∃xF is an L-axiom, then ⊢_T Δ, F_x(t₁), …, F_x(tₙ) is also an L-axiom. Thus
we may assume that ⊢_T Δ, ∃xF is the conclusion of one of the inferences. There are
two possibilities:
1. The main formula of the inference belongs to the set Δ. Then we have the
possibilities of an (∧)-, (∨)- or (∃)-inference. (∀)-inferences are excluded because
Δ is a set of ∃-formulas. In the case of an ∧-inference we have the premises
⊢_T Δ′, F₀, ∃xF and ⊢_T Δ′, F₁, ∃xF
and obtain terms t₁, …, t_k, s₁, …, s_l with
⊢_T Δ′, F₀, F_x(t₁), …, F_x(t_k) and ⊢_T Δ′, F₁, F_x(s₁), …, F_x(s_l)
by the induction hypothesis. Using the structural rule and an ∧-inference we
obtain
⊢_T Δ′, F₀ ∧ F₁, F_x(t₁), …, F_x(t_k), F_x(s₁), …, F_x(s_l),
which was the claim. We may treat the cases of an ∨- or ∃-inference simultaneously.
Thus assume that we have the premise
⊢_T Δ′, G′, ∃xF
which by either an ∨- or ∃-inference leads to the conclusion
⊢_T Δ′, G, ∃xF.
Then G′ is an ∃-formula, too, and by the induction hypothesis we have
⊢_T Δ′, G′, F_x(t₁), …, F_x(tₙ).
Using the same inference this yields
⊢_T Δ′, G, F_x(t₁), …, F_x(tₙ).
68 I. Pure Logic
2. The main formula of the inference is ∃xF. Then we have the premise
⊢_T Δ, F_x(t), ∃xF
for some term t and obtain
⊢_T Δ, F_x(t), F_x(t_1), …, F_x(t_m)
by the induction hypothesis, which proves the claim with the terms t, t_1, …, t_m.
Lemma 1.9.3 (Herbrand's lemma). If F is an ∃-formula with ⊨ ∃xF, then there are finitely many terms t_1, …, t_n such that ⊨ F_x(t_1) ∨ … ∨ F_x(t_n).
Proof. From ⊨ ∃xF we obtain ⊢_T ∃xF by the completeness theorem 1.8.15. Using Lemma 1.9.2 this yields ⊢_T F_x(t_1), …, F_x(t_n) which, by the soundness theorem, entails ⊨ F_x(t_1) ∨ … ∨ F_x(t_n).
Definition 1.9.4. We say that a formula F is in prenex form if
F = Q_1x_1 … Q_nx_n F_0,
where BV(F_0) = ∅ and Q_i ∈ {∃, ∀} for i = 1, …, n.
Theorem 1.9.5. For any formula F there is a formula F_N in prenex form such that F ≡ F_N (where ≡ denotes logical equivalence).
Proof by induction on the length of the formula F.
1. For an atomic formula we put F_N = F.
2. If F = G ∨ H, then by the induction hypothesis we have formulas G_N ≡ G and H_N ≡ H in prenex form. Let
G_N = Q^G_1 x_1 … Q^G_n x_n G_0
and
H_N = Q^H_1 y_1 … Q^H_m y_m H_0.
Without loss of generality we may assume that
∅ = {x_1, …, x_n} ∩ {y_1, …, y_m}
  = {x_1, …, x_n} ∩ FV(H_0)
  = {y_1, …, y_m} ∩ FV(G_0).
We put
F_N = Q^G_1 x_1 … Q^G_n x_n Q^H_1 y_1 … Q^H_m y_m (G_0 ∨ H_0)
and have to show
(A) F_N ≡ F
(B) H ∨ ∃xG ≡ ∃x(H ∨ G)
(C) H ∨ ∀xG ≡ ∀x(H ∨ G)
for x ∉ FV(H). Iterated application of (B) and (C) yields (A).
To prove (B) let S be an L-structure and η an S-assignment such that S ⊨ (H ∨ ∃xG)[η]. Then S ⊨ H[η] or S ⊨ ∃xG[η].
In the second case there is a ν ∼_x η (an assignment coinciding with η except possibly at x) such that S ⊨ G[ν], which entails
S ⊨ (H ∨ G)[ν],
and in the first case we get
S ⊨ (H ∨ G)[ν]
for any ν ∼_x η, since x ∉ FV(H). Hence
S ⊨ ∃x(H ∨ G)[η].
If S ⊭ (H ∨ ∃xG)[η], then S ⊭ H[η] and S ⊭ ∃xG[η], which entails
S ⊭ ∃x(H ∨ G)[η].
The proof of (C) is similar:
If S ⊨ (H ∨ ∀xG)[η], then S ⊨ H[η] or S ⊨ G[ν] for all ν ∼_x η. Thus
S ⊨ (H ∨ G)[ν]
for all ν ∼_x η, because x ∉ FV(H).
Hence S ⊨ ∀x(H ∨ G)[η]. If S ⊭ (H ∨ ∀xG)[η], then S ⊭ H[η] and S ⊭ G[ν] for some ν ∼_x η. As x ∉ FV(H) this again entails S ⊭ (H ∨ G)[ν] and we get
S ⊭ ∀x(H ∨ G)[η].
3. F = ¬G. Then there is G_N ≡ G where G_N = Q_1x_1 … Q_nx_n G_0 is in prenex form. But then
F ≡ ¬G_N ≡ Q'_1x_1 … Q'_nx_n ¬G_0 = F_N,
where Q' denotes the dual quantifier, i.e. ∀' = ∃ and ∃' = ∀. Obviously F_N is in prenex form.
4. F = QxG for Q ∈ {∀, ∃}. Then we have a prenex formula G_N ≡ G, obtain F ≡ QxG_N, and put F_N = QxG_N.
Definition 1.9.6. Let
F = ∃x_0 … ∃x_{k-1} ∀x_k Q_{k+1}x_{k+1} … Q_nx_n G
be an L-formula in prenex form with G quantifier free. We extend the language L to L* by adding a new function symbol f and put
F* = ∃x_0 … ∃x_{k-1} Q_{k+1}x_{k+1} … Q_nx_n G_{x_k}(f x_0 … x_{k-1}).
Let F^(0) = F and L^(0) = L and define F^(n+1) = (F^(n))* and L^(n+1) = (L^(n))*. Then there is an n ∈ ℕ such that F^(n+1) = F^(n) and thus also L^(n+1) = L^(n). Let m be the least such n and put
F_H = F^(m) and L_H = L^(m).
We call F_H a Herbrand form of F and L_H a Herbrand language for F. For a formula F not in prenex form we define F_H = (F_N)_H.
At this point we want to give an easy example for computing the Herbrand form of a formula. Therefore let
F = ∀x∃y(x · y = 1) → ∀x∃y(y · x = 1)
in the language L_GT of group theory. Because F is not in prenex form we first compute a prenex form:
F ≡ ∃x_1∀x_2 ¬(x_1 · x_2 = 1) ∨ ∀y_1∃y_2 (y_2 · y_1 = 1)
  ≡ ∃x_1∀x_2∀y_1∃y_2 (¬(x_1 · x_2 = 1) ∨ (y_2 · y_1 = 1)).
This formula is in prenex form, say F_N. Thus we have
(F_N)* = ∃x_1∀y_1∃y_2 (¬(x_1 · f x_1 = 1) ∨ (y_2 · y_1 = 1))
for a new function symbol f and
(F_N)^(2) = ∃x_1∃y_2 (¬(x_1 · f x_1 = 1) ∨ (y_2 · g x_1 = 1))
with g as a new function symbol. This is a Herbrand form of F.
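The iteration F, F*, F**, … of Definition 1.9.6 can be sketched as a toy program. In the following illustrative Python sketch (not from the text) a prenex formula is a pair of a quantifier prefix and a string matrix; fresh symbols f1, f2, … play the role of f and g in the example above. Plain string substitution is used, so variable names are assumed not to overlap as substrings.

```python
import itertools

def star(prefix, matrix, fresh):
    """One *-step: drop the first universal quantifier and substitute
    f(<existential variables in front of it>) for its variable."""
    for i, (q, x) in enumerate(prefix):
        if q == "forall":
            f = next(fresh)
            args = ", ".join(v for _, v in prefix[:i])
            return prefix[:i] + prefix[i + 1:], matrix.replace(x, f"{f}({args})")
    return prefix, matrix                 # no universal left: fixed point

def herbrand_form(prefix, matrix):
    fresh = (f"f{n}" for n in itertools.count(1))
    while any(q == "forall" for q, _ in prefix):
        prefix, matrix = star(prefix, matrix, fresh)
    return prefix, matrix

# the group theory example above, in ASCII notation:
print(herbrand_form([("exists", "x1"), ("forall", "x2"),
                     ("forall", "y1"), ("exists", "y2")],
                    "-(x1*x2 = 1) v (y2*y1 = 1)"))
```

The two fresh symbols produced correspond exactly to f and g of the example.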
Lemma 1.9.7. We have ⊨ F iff ⊨ F_H.
Proof. Since F ≡ F_N we may assume that F is in prenex form, and it suffices to show
⊨ F ⇔ ⊨ F*,
because then the lemma follows by iteration. So let F = ∃x_0 … ∃x_{k-1} ∀x_k G where G is in prenex form. Then
F* = ∃x_0 … ∃x_{k-1} G_{x_k}(f x_0 … x_{k-1}).
We have
⊨ ∀x_k G → G_{x_k}(f x_0 … x_{k-1}),
which entails
⊨ ∃x_0 … ∃x_{k-1} ∀x_k G → ∃x_0 … ∃x_{k-1} G_{x_k}(f x_0 … x_{k-1}).
Hence ⊨ F ⇒ ⊨ F*. For the opposite direction assume ⊭ F. Then there is an L-structure S and an S-assignment η such that S ⊭ F[η], i.e.
S ⊭ ∃x_0 … ∃x_{k-1} ∀x_k G[η].
Choose s_0, …, s_{k-1} ∈ S arbitrarily and define
ν(x) = η(x) for x ∉ {x_0, …, x_{k-1}}
and ν(x_i) = s_i. Then we have S ⊭ ∀x_k G[ν], which shows that there is an assignment η' ∼_{x_k} ν such that S ⊭ G[η']. Put
f^{S*}(s_0, …, s_{k-1}) = η'(x_k).
This expands S to an L*-structure S*. If we had
S* ⊨ ∃x_0 … ∃x_{k-1} G_{x_k}(f x_0 … x_{k-1})[η],
we could find an assignment ν ∼_{x_0,…,x_{k-1}} η with
S* ⊨ G_{x_k}(f x_0 … x_{k-1})[ν].
Therefore we had S* ⊨ G[η'] for the assignment η' ∼_{x_k} ν with η'(x_k) = f^{S*}(s_0, …, s_{k-1}), where s_i = ν(x_i). But this contradicts the definition of f^{S*}. So we conclude
S* ⊭ ∃x_0 … ∃x_{k-1} G_{x_k}(f x_0 … x_{k-1})[η] and S* ⊭ F*[η].
Theorem 1.9.8 (Herbrand's theorem). Let F be a sentence. Then we have ⊨ F iff there are finitely many k-tuples
(t^1_1, …, t^1_k), …, (t^n_1, …, t^n_k)
of L_H-terms such that
⊨ G_{x_1,…,x_k}(t^1_1, …, t^1_k) ∨ … ∨ G_{x_1,…,x_k}(t^n_1, …, t^n_k),
where F_H = ∃x_1 … ∃x_k G and G is quantifier free.
Proof. By the previous lemma we have ⊨ F ⇔ ⊨ F_H. Thus it suffices to show
⊨ F_H ⇔ ⊨ G_{x_1,…,x_k}(t^1_1, …, t^1_k) ∨ … ∨ G_{x_1,…,x_k}(t^n_1, …, t^n_k).
To show the `⇐' direction, we apply the completeness theorem for the Tait-calculus to get
⊢_T G_{x_1,…,x_k}(t^1_1, …, t^1_k), …, G_{x_1,…,x_k}(t^n_1, …, t^n_k)
and conclude
⊢_T ∃x_1 … ∃x_k G
by applications of the ∃-rule. For the opposite direction we assume ⊨ F_H and get ⊢_T ∃x_1 … ∃x_k G by the completeness theorem. Since ∃x_1 … ∃x_k G is an ∃-formula we may apply Lemma 1.9.2 to obtain
⊢_T ∃x_2 … ∃x_k G_{x_1}(t^1_1), …, ∃x_2 … ∃x_k G_{x_1}(t^n_1).
From this we get
⊢_T ∃x_2 … ∃x_k (G_{x_1}(t^1_1) ∨ … ∨ G_{x_1}(t^n_1)).
Applying 1.9.2 once more this yields
⊢_T ∃x_3 … ∃x_k (G_{x_1,x_2}(t^1_1, t^1_2) ∨ … ∨ G_{x_1,x_2}(t^n_1, t^1_2)), …, ∃x_3 … ∃x_k (G_{x_1,x_2}(t^1_1, t^m_2) ∨ … ∨ G_{x_1,x_2}(t^n_1, t^m_2))
for suitable terms t^1_2, …, t^m_2. Iterating this procedure and possibly adding dummy terms we finally get the claim.
At this point we finish the observations concerning Herbrand's theorem of 1930. We are going to derive a second consequence of Gentzen's Hauptsatz: the interpolation theorem.
Theorem 1.9.9 (Interpolation for the Tait-calculus). Let Δ and Γ be finite sets of formulas in the Tait-language for L such that neither ⊢_T Δ nor ⊢_T Γ. If ⊢_T Δ, Γ, then there is a formula E satisfying the following properties:
1. FV(E) ⊆ FV(Δ) ∩ FV(Γ).
2. E contains only predicate symbols which occur in formulas of Δ̄ as well as in formulas of Γ, where Δ̄ = {F̄ : F ∈ Δ}.
3. ⊢_T Δ, E and ⊢_T Ē, Γ.
We call E an interpolation formula for Δ and Γ.
Proof. We use induction on the definition of ⊢_T Δ, Γ.
1. Assume ⊢_T Δ, Γ is an L-axiom. Since ⊬_T Γ and ⊬_T Δ there is an atomic formula P with P ∈ Δ and P̄ ∈ Γ (or P̄ ∈ Δ and P ∈ Γ). Set E = P̄ (or E = P).
2. Assume that the last inference was
⊢_T Δ, F_0, Γ and ⊢_T Δ, F_1, Γ ⇒ ⊢_T Δ, Γ,
where (F_0 ∧ F_1) ∈ Δ. There are the following sub-cases:
(a) ⊢_T Δ, F_0. Then ⊬_T Δ, F_1, because otherwise we had ⊢_T Δ. Thus by the induction hypothesis there is an interpolation formula E for Δ, F_1 and Γ. Since F_1 is a sub-formula of (F_0 ∧ F_1) ∈ Δ, E obviously satisfies properties 1. and 2. for Δ and Γ. We have ⊢_T Ē, Γ and ⊢_T Δ, F_1, E, and ⊢_T Δ, F_0 also entails ⊢_T Δ, F_0, E. Hence ⊢_T Δ, E by an ∧-inference, and E is also an interpolation formula for Δ and Γ.
(b) ⊬_T Δ, F_0 and ⊬_T Δ, F_1.
By the induction hypothesis we have interpolation formulas E_1 for Δ, F_0 and Γ and E_2 for Δ, F_1 and Γ. For the same reasons as above E_1 and E_2 satisfy properties 1. and 2. also for Δ and Γ. From ⊢_T Ē_1, Γ and ⊢_T Ē_2, Γ we obtain ⊢_T (Ē_1 ∧ Ē_2), Γ, and ⊢_T Δ, F_0, E_1 and ⊢_T Δ, F_1, E_2 first yield ⊢_T Δ, F_0, E_1 ∨ E_2 and ⊢_T Δ, F_1, E_1 ∨ E_2. Then we get ⊢_T Δ, E_1 ∨ E_2 by an ∧-inference. Thus (E_1 ∨ E_2) is an interpolation formula for Δ and Γ.
3. The last inference was
⊢_T Δ, F_i, Γ ⇒ ⊢_T Δ, Γ
for i ∈ {0, 1} and (F_0 ∨ F_1) ∈ Δ according to the ∨-rule. Then we have ⊬_T Δ, F_i, because otherwise we had ⊢_T Δ. By the induction hypothesis there is an interpolation formula E for Δ, F_i and Γ, also satisfying 1. and 2. for the sets Δ and Γ. From ⊢_T Δ, F_i, E, however, we obtain ⊢_T Δ, E by an ∨-inference. Thus E is also an interpolation formula for Δ and Γ.
4. The last inference was
⊢_T Δ, F_x(y), Γ ⇒ ⊢_T Δ, Γ
according to the ∀-rule where ∀xF ∈ Δ. Again we have ⊬_T Δ, F_x(y), which by the induction hypothesis gives us an interpolation formula E for Δ, F_x(y) and Γ. By the variable condition we have y ∉ FV(Δ, Γ), which by 1. entails that y ∉ FV(E). Thus we get from ⊢_T Δ, F_x(y), E also ⊢_T Δ, E by an ∀-inference. It is obvious that E also satisfies 1. and 2. for Δ and Γ and hence is an interpolation formula for Δ and Γ.
5. The last inference was an ∃-inference
⊢_T Δ, F_x(t), Γ ⇒ ⊢_T Δ, Γ
with ∃xF ∈ Δ. By the induction hypothesis we have an interpolation formula D for Δ, F_x(t) and Γ. Then D satisfies property 2. also for Δ and Γ, since there are no new predicate symbols in F_x(t).
Let
{y_1, …, y_n} = (FV(t) \ FV(Δ)) ∩ FV(Γ).
We have
⊢_T Δ, F_x(t), D and ⊢_T D̄, Γ.
By ∃-inferences we thus obtain ⊢_T Δ, D and ⊢_T ∃y_1 … ∃y_n D̄, Γ. Since all of the variables y_1, …, y_n do not belong to FV(Δ), we apply ∀-inferences to get
⊢_T Δ, ∀y_1 … ∀y_n D.
Putting E = ∀y_1 … ∀y_n D we see that E is an interpolation formula for Δ and Γ.
Annoyingly we also have to treat the cases in which the main formula of the last inference belongs to the set Γ. Since most of them are dual to those already treated we can be quite short.
6. ⊢_T Δ, F_0, Γ and ⊢_T Δ, F_1, Γ ⇒ ⊢_T Δ, Γ and (F_0 ∧ F_1) ∈ Γ.
(a) ⊢_T F_0, Γ entails ⊬_T F_1, Γ. Let E be an interpolation formula for Δ and F_1, Γ. Then ⊢_T Δ, E and ⊢_T Ē, F_1, Γ. Thus ⊢_T Ē, Γ by the structural rule and (∧), and E interpolates also Δ and Γ.
(b) ⊬_T F_0, Γ and ⊬_T F_1, Γ give interpolation formulas E_0 and E_1. Thus ⊢_T Δ, E_i for i = 0, 1 and ⊢_T Ē_0, F_0, Γ as well as ⊢_T Ē_1, F_1, Γ; then E = E_0 ∧ E_1 interpolates Δ and Γ.
7. ⊢_T Δ, F_i, Γ ⇒ ⊢_T Δ, Γ and (F_0 ∨ F_1) ∈ Γ. Then ⊬_T F_i, Γ, which gives E with ⊢_T Δ, E and ⊢_T Ē, F_i, Γ. E also interpolates Δ and Γ.
8. ⊢_T Δ, F_x(y), Γ ⇒ ⊢_T Δ, Γ by an ∀-inference and ∀xF ∈ Γ. By the induction hypothesis there is a formula E with ⊢_T Δ, E and ⊢_T Ē, F_x(y), Γ. Since y ∉ FV(E) we also get E as an interpolation formula for Δ and Γ.
9. ⊢_T Δ, F_x(t), Γ ⇒ ⊢_T Δ, Γ by an ∃-inference and ∃xF ∈ Γ. Then we have a formula D with ⊢_T Δ, D and ⊢_T D̄, F_x(t), Γ. Let
{y_1, …, y_n} = (FV(t) \ FV(Γ)) ∩ FV(Δ).
By ∃-inferences we first get ⊢_T Δ, ∃y_1 … ∃y_n D and ⊢_T D̄, Γ. As before we have y_1, …, y_n ∉ FV(Γ), which yields ⊢_T ∀y_1 … ∀y_n D̄, Γ. Putting E = ∃y_1 … ∃y_n D we get an interpolation formula for Δ and Γ.
Corollary 1.9.10. If Δ and Γ are finite sets of formulas such that ⊢_T Δ, Γ but ⊬_T Δ and ⊬_T Γ, then Δ and Γ have at least one common predicate symbol.
The hypotheses ⊬_T Δ and ⊬_T Γ in the interpolation theorem for the Tait-calculus are of course annoying. To get a general formulation of the interpolation theorem we should try to get rid of them. This, however, is not easy. If we assume that we have ⊢_T Γ, then we obviously obtain ⊢_T Δ, Γ for any formula set Δ, even if Γ and Δ have no predicate symbol in common. Thus the only possible interpolation formula is the empty formula, which is not yet available in our language. So let us try to introduce a symbol, say ⊥, for the empty formula. ⊥ may be viewed as a 0-placed predicate symbol. Therefore we need its dual notion, say ⊤, in the Tait-language. We define ⊥̄ = ⊤ and ⊤̄ = ⊥. The L-axiom for this symbol then becomes ⊢ Δ, ⊤, ⊥, which is the same as
⊢ Δ, ⊤ (⊤-axiom)
because ⊥ is supposed to stand for the empty formula. The interpretation of the symbol ⊥ is of course Val_S(⊥, η) = f in all L-structures S. For this reason the interpretation of the dual symbol in the Tait-language has to be
Val_S(⊤, η) = t
for all L-structures S. Due to these interpretations we have that ¬(Δ ∪ {⊤}) is always inconsistent, and so the soundness theorem for the Tait-calculus extends also to the Tait-calculus with ⊤-axiom. The proof of the completeness theorem remains literally the same. So we have soundness and completeness for the Tait-calculus with ⊤-axiom, too. We first observe that ⊥ really behaves like the empty formula. We will use the obvious notation ⊢_{⊤-Ax} Δ to express that Δ is derivable in the Tait-calculus with ⊤-axiom.
Proposition 1.9.11. ⊢_{⊤-Ax} Δ, ⊥ implies ⊢_{⊤-Ax} Δ.
Proof. By induction on the definition of ⊢_{⊤-Ax} Δ, ⊥. The formula ⊥ can never be the main formula of the last inference. Therefore we get the claim immediately from the induction hypothesis in case that ⊢_{⊤-Ax} Δ, ⊥ is the conclusion of an inference. But if ⊢_{⊤-Ax} Δ, ⊥ is an axiom, then ⊢_{⊤-Ax} Δ has to be an axiom, too.
Another easy consequence is that the Tait-calculus with ⊤-axiom is a conservative extension of the usual Tait-calculus. We will return to conservative extensions later (cf. section 2.1). Therefore we do not give a general definition of conservative extensions but formulate the result as follows.
Proposition 1.9.12. Assume ⊢_{⊤-Ax} Δ and that Δ contains neither the symbol ⊤ nor ⊥. Then ⊢_T Δ.
Proof. The proof is straightforward by induction on the definition of ⊢_{⊤-Ax} Δ.
We are now prepared to formulate the general version of the interpolation theorem.
Theorem 1.9.13. If ⊢_{⊤-Ax} Δ, Γ, then there is an interpolation formula for Δ and Γ.
Proof. In case that we have ⊬_{⊤-Ax} Δ and ⊬_{⊤-Ax} Γ we use Proposition 1.9.11, Proposition 1.9.12 and Theorem 1.9.9. Otherwise we either have ⊢_{⊤-Ax} Δ or ⊢_{⊤-Ax} Γ and obtain ⊢_{⊤-Ax} Δ, ⊥ or ⊢_{⊤-Ax} Γ, ⊥, respectively. In the first case we have the interpolation formula ⊥, because we have ⊢_{⊤-Ax} ⊤, Γ by the ⊤-axiom, and in the second case we get for the same reasons ⊤ as interpolation formula.
As a consequence of Theorem 1.9.13 we get the famous theorem published by William Craig in 1957.
Theorem 1.9.14 (Craig's interpolation theorem). If we have ⊨ F → G, then there is a formula E which interpolates F and G, i.e. we have ⊨ F → E and ⊨ E → G, and E contains only predicate symbols which occur both in F and G. The free variables of E also occur in both F and G.
Proof. If ⊨ F → G we get ⊢_{⊤-Ax} F̄, G by the completeness theorem for the Tait-calculus with ⊤-axiom. By Theorem 1.9.13 there is an interpolation formula E for F̄ and G, i.e. we have ⊢_{⊤-Ax} F̄, E and ⊢_{⊤-Ax} Ē, G, which yield ⊨ F → E and ⊨ E → G by the soundness theorem. In Theorem 1.9.13 we have proved that E has all the other properties stated in the claim.
There is a nice application of the interpolation theorem which is due to Abraham Robinson [*1918, †1974] in 1956.
Theorem 1.9.15 (Joint consistency theorem). Let M_1 and M_2 be consistent sets of L-sentences. Then M_1 ∪ M_2 is consistent iff there is no sentence F such that M_1 ⊨ F and M_2 ⊨ ¬F.
Proof. If there is a sentence F such that M_1 ⊨ F and M_2 ⊨ ¬F, then M_1 ∪ M_2 is inconsistent. For the other direction let M_1 ∪ M_2 be inconsistent. By the compactness theorem there are finite subsets N_1 ⊆ M_1 and N_2 ⊆ M_2 such that
N_1 ∪ N_2 is inconsistent.
Now let F_1 be the conjunction of the sentences in N_1 and F_2 that of those in N_2. Since N_1 ∪ N_2 is inconsistent we have N_1 ⊨ ¬F_2, which is
⊨ F_1 → ¬F_2.
By Craig's interpolation theorem there is a sentence E such that
⊨ F_1 → E and ⊨ E → ¬F_2.
But this implies
M_1 ⊨ E,
since M_1 ⊨ F_1 and F_1 ⊨ E. But we also have
M_2 ⊨ ¬E,
because F_2 ⊨ ¬E and M_2 ⊨ F_2. So we have shown that there is a sentence E such that
M_1 ⊨ E and M_2 ⊨ ¬E.
There is a sharper form of the interpolation theorem due to Roger C. Lyndon in 1959. This theorem also tells us something about the form of the occurrences of the predicate symbols in the interpolation formula. To formulate the theorem we need the following notion.
Definition 1.9.16. We define inductively the positive and negative occurrences of a predicate symbol P in an L-formula F.
1. P occurs positively in Pt_1 … t_n.
2. If P occurs positively (negatively) in F, then P occurs negatively (positively) in ¬F.
3. If P occurs positively (negatively) in F, then P occurs positively (negatively) in (F ∨ G) and (G ∨ F).
4. If P occurs positively (negatively) in F, then P occurs positively (negatively) in ∃xF.
For an L-formula F let us denote its translation into the Tait-language L_T by F^T. Formally the translation is given inductively by
1. (Pt_1 … t_n)^T = Pt_1 … t_n
2. (¬F)^T = (F^T)‾ (cf. Definition 1.8.2)
3. (F ∨ G)^T = F^T ∨ G^T
4. (∃xF)^T = ∃xF^T.
On the other hand any formula in the Tait-language L_T is easily retranslated into an L-formula by translating an occurrence of P̄t_1 … t_n into ¬Pt_1 … t_n. Positive and negative occurrences of predicate symbols are easy to locate in the Tait-language. We have the following observation.
Proposition 1.9.17. P occurs positively (negatively) in F iff P (respectively P̄) occurs in F^T.
The proof is an easy exercise.
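The bookkeeping behind Definition 1.9.16 is a simple sign-propagating recursion, which the following illustrative Python sketch (not from the text; tuple encoding of formulas is a hypothetical choice) makes explicit: +1 stands for a positive, −1 for a negative occurrence.

```python
def polarities(f, P, sign=+1, acc=None):
    """Collect the signs (+1 positive, -1 negative) with which the
    predicate symbol P occurs in the tuple-encoded formula f."""
    if acc is None:
        acc = set()
    op = f[0]
    if op == "atom":
        if f[1] == P:
            acc.add(sign)                   # clause 1 (with the current sign)
    elif op == "not":
        polarities(f[1], P, -sign, acc)     # clause 2: negation flips the sign
    elif op == "or":
        polarities(f[1], P, sign, acc)      # clause 3: disjunction keeps it
        polarities(f[2], P, sign, acc)
    else:                                   # "exists" (clause 4); "forall" analogously
        polarities(f[2], P, sign, acc)
    return acc

# (not P(x)) or exists y P(y): P occurs both negatively and positively
F = ("or", ("not", ("atom", "P", "x")), ("exists", "y", ("atom", "P", "y")))
print(polarities(F, "P"))   # contains both +1 and -1
```

By Proposition 1.9.17 the same information can be read off the translation F^T: the +1 occurrences appear there as P, the −1 occurrences as P̄.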
Theorem 1.9.18 (Lyndon's interpolation theorem). If ⊨ F → G, then there is an interpolation formula E for F and G such that every predicate symbol occurring positively (negatively) in E occurs positively (negatively) in both formulas F and G.
Proof. The proof uses Proposition 1.9.17. By the interpolation theorem for the Tait-calculus with ⊤-axiom we get an interpolation formula E for (¬F)^T and G^T. Any predicate symbol P occurring in E occurs as P in (¬¬F)^T and in G^T, and thus positively in F and G, and any predicate symbol P̄ occurring in E occurs as P̄ in (¬¬F)^T and in G^T, and thus negatively in F and G. The retranslation of E into an L-formula transfers occurrences of P into positive occurrences of P and occurrences of P̄ into negative ones.
A sometimes useful modification of the interpolation theorem is the following one.
Theorem 1.9.19. If M ⊨ F for some formula set M (not necessarily finite), then there is a formula E with FV(E) ⊆ FV(M) ∩ FV(F) such that
M ⊨ E and ⊨ E → F,
and every predicate symbol occurring positively (negatively) in E also occurs positively (negatively) in F and in some formulas of M.
Proof. By the compactness and the deduction theorem we get
⊨ G_1 ∧ … ∧ G_n → F
for finitely many formulas {G_1, …, G_n} ⊆ M. An application of Lyndon's interpolation theorem proves the theorem.
Theorem 1.9.19 has the consequence that for proving a theorem F from an axiom system Ax we need at most those axioms in Ax which say something about the predicate symbols occurring in F.
There are two nice applications of the interpolation theorem. The first is Evert W. Beth's definability theorem, which is already a consequence of Craig's interpolation theorem.
Theorem 1.9.20 (Beth's definability theorem). We say that a formula F defines an n-ary predicate symbol P implicitly if we have
(1.5) ⊨ F ∧ F_P(Q) → ∀x_1 … ∀x_n((Px_1 … x_n) ↔ (Qx_1 … x_n)).
Let P be implicitly defined by F. Then there is a formula G such that FV(G) ⊆ {x_1, …, x_n},
⊨ F → ∀x_1 … ∀x_n(Px_1 … x_n ↔ G),
and P does not occur in G. This means that G defines P explicitly.
Proof. From (1.5) we get
⊨ F ∧ Px_1 … x_n → (F_P(Q) → Qx_1 … x_n).
By Craig's interpolation theorem there is an interpolation formula G such that
(1.6) ⊨ F ∧ Px_1 … x_n → G
and
(1.7) ⊨ G → (F_P(Q) → Qx_1 … x_n).
We have FV(G) ⊆ {x_1, …, x_n} and neither P nor Q occurs in G. Thus we get from (1.7), substituting P for Q,
(1.8) ⊨ G → (F → Px_1 … x_n),
and (1.6) and (1.8) together yield
⊨ F → ∀x_1 … ∀x_n(G ↔ Px_1 … x_n).
The second application uses Lyndon's version of the interpolation theorem. It is a theorem about monotone operators. To define monotone operators, let L be a first order language and S an L-structure. Let P be a new unary predicate symbol. An L(P)-formula F with FV(F) = {x} defines an operator Γ_F : Pow(S) → Pow(S) by
Γ_F(N) = {s ∈ S : (S, N) ⊨ F[s]}
for N ⊆ S, where (S, N) is the L(P)-expansion of S interpreting P by N ⊆ S. An operator
Γ : Pow(S) → Pow(S)
is monotone on S if N ⊆ M entails Γ(N) ⊆ Γ(M). Γ_F is globally monotone if it is monotone on every L-structure S.
Lemma 1.9.21. Let F be an L(P)-formula with FV(F) = {x} and at most positive occurrences of P. Then F defines a globally monotone operator.
Proof. It suces to prove
(1.9) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! F ! FP (Q)
for P occurring at most positively in F and
(1.10) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (FP (Q) ! F)
for P occurring at most negatively. We prove (1.9) and (1.10) simultaneously by
induction on the length of F: If F = Pt1 : : :tn, then P occurs positively and we have
j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn ) ! Pt1 : : :tn ! Qt1 : : :tn:
If F = :G and P occurs positively (negatively) in F, then P occurs negatively (posi-
tively) in G: By induction hypothesis we have
(1.11) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (GP (Q) ! G)
or
(1.12) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (G ! GP (Q));
respectively. But (1.11) and (1.12) immediately entail (1.9) and (1.10). If F = G _ H
and P occurs positively (negatively) in F, then P occurs at most positively (negatively)
in both G and H: Thus we have by induction hypothesis
(1.13) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (G ! GP (Q))
and
(1.14) j= 8x1 : : : 8xn (Px1 : : :xn ! Qx1 : : :xn ) ! (H ! HP (Q));
or we have
(1.15) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (GP (Q) ! G)
and
(1.16) j= 8x1 : : : 8xn (Px1 : : :xn ! Qx1 : : :xn ) ! (HP (Q) ! H);
respectively. But (1.13) and (1.14) entail
j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (G _ H ! (G _ H)P (Q))
and (1.15) and (1.16)
j= 8x1 : : : 8xn (Px1 : : :xn ! Qx1 : : :xn) ! ((G _ H)P (Q) ! G _ H):
If F = 9xG and P occurs positively (negatively) in F, then P occurs positively (neg-
atively) in G and we get
(1.17) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (G ! GP (Q))
80 I. Pure Logic
or
(1.18) j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (GP (Q) ! G)
by induction hypothesis. From (1.17) or (1.18), however, we immediately get
j= 8x1 : : : 8xn (Px1 : : :xn ! Qx1 : : :xn ) ! (9xG ! 9xGP (Q))
and
j= 8x1 : : : 8xn(Px1 : : :xn ! Qx1 : : :xn) ! (9xGP (Q) ! 9xG):
Now we may use Lyndon's interpolation theorem to show that indeed any globally monotone operator is definable by a P-positive formula F.
Theorem 1.9.22. A first order definable operator is globally monotone iff it is definable by some P-positive formula.
Proof. One direction of the theorem is Lemma 1.9.21. To prove the opposite direction let Γ_F be globally monotone. Then we have
(S, P, Q) ⊨ ∀x_1 … ∀x_n(Px_1 … x_n → Qx_1 … x_n) → ∀x(F → F_P(Q))
for any structure S and any expansion (S, P, Q) of S. Thus
⊨ ∀x_1 … ∀x_n(Px_1 … x_n → Qx_1 … x_n) ∧ F → F_P(Q).
By Lyndon's interpolation theorem we get an interpolation formula E, i.e.
(1.19) ⊨ ∀x_1 … ∀x_n(Px_1 … x_n → Qx_1 … x_n) ∧ F → E
and
(1.20) ⊨ E → F_P(Q).
Since Q occurs only positively in
∀x_1 … ∀x_n(Px_1 … x_n → Qx_1 … x_n) ∧ F,
it can occur at most positively in E. Thus choosing Q as P in (1.19) and (1.20) we get ⊨ F ↔ E_Q(P), i.e. F is logically equivalent to a formula E_Q(P) which has at most positive occurrences of P.
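On a finite structure, both the monotonicity of Lemma 1.9.21 and the least fixed point of Exercise E 1.9.3 below can be checked by brute force. The following Python sketch is an illustration under assumptions of our own choosing (domain {0, 1, 2} and the P-positive formula F(x) = (x = 0) ∨ P(x − 1) are hypothetical examples, not from the text).

```python
from itertools import chain, combinations

S = {0, 1, 2}                             # a small illustrative domain

def gamma(N):
    # Gamma_F(N) = { s in S : (S, N) |= F[s] } for the P-positive
    # (hypothetical) formula F(x) = (x = 0) v P(x - 1)
    return {s for s in S if s == 0 or (s - 1) in N}

def subsets(s):
    return [set(c) for c in
            chain.from_iterable(combinations(sorted(s), r)
                                for r in range(len(s) + 1))]

# monotone on S: N subset M implies gamma(N) subset gamma(M)
assert all(gamma(N) <= gamma(M)
           for N in subsets(S) for M in subsets(S) if N <= M)

# least fixed point by iteration from the empty set (cf. E 1.9.3):
# {} -> {0} -> {0, 1} -> {0, 1, 2}, which is then fixed
N = set()
while gamma(N) != N:
    N = gamma(N)
print(N)
```

Of course this check proves nothing in general; it only exercises the definitions on one finite example.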
Exercises
E 1.9.1. Let F, G be quantifier free formulas of the first order language L. Let 0 be a constant symbol, f and | · | function symbols and < a predicate symbol. Compute a prenex form of the following formulas:
a) ∀xF ↔ ∃xG
b) ∀x(0 < x → ∃y(0 < y ∧ ∀z(|z − x_0| < y → |f(z) − f(x_0)| < x))).
E 1.9.2. Let P, Q, R be binary predicate symbols of the Tait-language L_T. Compute, with respect to the proof of the interpolation theorem for the Tait-calculus, an interpolation formula for
∀x(∃yPxy ∧ ∃yQxy) and ∀x∃y(Pxy ∨ Rxy).
Hint: First determine a derivation of
∀x(∃yPxy ∧ ∃yQxy) → ∀x∃y(Pxy ∨ Rxy)
in the Tait-calculus.
E 1.9.3. Prove that every monotone operator has a least fixed point, i.e. if Γ is the operator, then the least fixed point is a set s with
a) Γ(s) = s
b) ∀t(Γ(t) = t → s ⊆ t).
E 1.9.4. Prove or disprove: For every first order language L containing only finitely many constant and function symbols there is an n ∈ ℕ such that for every L-formula ∃xF with ⊢_T ∃xF there are t_1, …, t_m, m ≤ n, with
⊢_T F_x(t_1) ∨ … ∨ F_x(t_m).
E 1.9.5. Let L(C, F, P) be a first order language with C ∪ F ∪ P finite. Let P, Q be unary predicate symbols not in L, S a finite L-structure and R ⊆ S. We call
R S-invariant ⇔ for all isomorphisms φ : S ≅ S we have R = {φ(r) : r ∈ R}.
For the notion of an isomorphism cf. Definition 2.2.6. We denote the expansions of an L-structure S' to L(P) and L(P, Q) by (S', P^{S'}) and (S', P^{S'}, Q^{S'}). Now let P^S = R and let F be an L(P)-formula such that for all L-structures S'
(S', P^{S'}) ⊨ F ⇔ (S', P^{S'}) ≅ (S, P^S).
Prove the following claims:
a) R is S-invariant and (S', P^{S'}) ≅ (S, P^S) ⇒ P^{S'} is S'-invariant.
b) R is S-invariant ⇒ (S', P^{S'}, Q^{S'}) ⊨ F ∧ F_P(Q) → ∀x(Px ↔ Qx).
c) R is S-invariant ⇔ there is an L-formula G with FV(G) ⊆ {x_0} such that R = {s ∈ S : S ⊨ G[s]}.
Hint: Use Beth's definability theorem.
d) For any set X ⊆ S there is an L-formula G with FV(G) ⊆ {x} such that
{φ(s) : s ∈ X and φ : S ≅ S} = {s ∈ S : S ⊨ G[s]}.
E 1.9.6. Prove Proposition 1.9.17.
E 1.9.7. It is possible to strengthen Herbrand's lemma in the following way:
If T is a theory with
T ⊨ ∃xF,
then there are finitely many terms t_1, …, t_n such that
T ⊨ F_x(t_1) ∨ … ∨ F_x(t_n).
Which restrictions have to be made on T and F so that the above strengthening is correct?
E 1.9.8. Do we have in general ⊨ F ↔ F_H, i.e. do we have for all L_H-structures S (which are expansions of L-structures) and all S-assignments η
S ⊨ (F ↔ F_H)[η]?
1.10 First Order Logic with Identity
The equality of objects in the domain of an L-structure is such a basic property that it should be possible to express it in any logic. Up to now we cannot do that. Therefore we are going to introduce a symbol `=' into the first order language L and call the extended language the `language L with identity', denoted by L_I.
Definition 1.10.1. The `standard' interpretation of `=' in any L_I-structure S is given by
=^S is the set {(s, s) : s ∈ S}.
Since we don't want to repeat all the work we have done up to now for languages with identity, we try to treat `=' as an ordinary binary predicate constant and so to transfer the results of the previous sections to languages with identity. In order to give `=' the special meaning intended by its standard interpretation, we cannot treat it as an arbitrary binary predicate constant but have to fix its meaning by giving defining axioms for it.
Definition 1.10.2. The following sentences are the defining axioms for `='.
1. ∀x(x = x) (reflexivity)
2. ∀x∀y(x = y → y = x) (symmetry)
3. ∀x∀y∀z(x = y ∧ y = z → x = z) (transitivity)
4. ∀x_1 … ∀x_n ∀y_1 … ∀y_n(x_1 = y_1 ∧ … ∧ x_n = y_n → f x_1 … x_n = f y_1 … y_n) for all function symbols f in L (compatibility with functions)
5. ∀x_1 … ∀x_n ∀y_1 … ∀y_n(x_1 = y_1 ∧ … ∧ x_n = y_n → (Px_1 … x_n → Py_1 … y_n)) for all predicate symbols P in L (compatibility with relations).
We call the set of defining axioms for `=' Id. Every axiom of Id is of the form ∀~x F with quantifier free F. We define
Id_0 = {F_{~x}(~t) : BV(F) = ∅ ∧ ∀~x F ∈ Id}
and call Id_0 the open version of the identity axioms. The open version is apparently as good as Id, since we have
Proposition 1.10.3. Id_0 ⊨ F entails Id ⊨ F.
Proof. If Id_0 ⊨ F, then we have by compactness and the deduction theorem
⊨ F_1 ∧ … ∧ F_n → F for F_1, …, F_n ∈ Id_0.
Since each F_i is an instance of some axiom ∀~x G_i ∈ Id and ⊨ ∀~x G_i → F_i, this entails
⊨ ∀~x G_1 ∧ … ∧ ∀~x G_n → F.
Thus Id ⊨ F by the deduction theorem.
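That the axioms of Id do not force the standard interpretation of `=' can be seen on a small finite example, which the following Python sketch checks by brute force (the domain and interpretations are illustrative choices of ours, not from the text): `=' is read as congruence modulo 3 on {0, …, 5}, with one function symbol and one predicate symbol chosen compatible with it.

```python
S = range(6)
eq = lambda a, b: a % 3 == b % 3   # "=" read as congruence mod 3, not identity
f = lambda a: (a + 1) % 6          # one function symbol: successor mod 6
P = lambda a: a % 3 == 0           # one predicate symbol: divisibility by 3

assert all(eq(a, a) for a in S)                                  # reflexivity
assert all(eq(b, a) for a in S for b in S if eq(a, b))           # symmetry
assert all(eq(a, c) for a in S for b in S for c in S
           if eq(a, b) and eq(b, c))                             # transitivity
assert all(eq(f(a), f(b)) for a in S for b in S if eq(a, b))     # compat. with f
assert all(P(b) for a in S for b in S if eq(a, b) and P(a))      # compat. with P
print("all Id axioms hold, although = is not the identity on S")
```

This is exactly the situation that Theorem 1.10.7 below repairs by collapsing the structure along =^S.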
In the rest of this section we are going to investigate the possible differences between the two viewpoints:
1. Taking = as a `logical symbol' and interpreting it standardly in any L_I-structure
2. Taking = as a `non-logical symbol' (i.e. a symbol belonging to the set P) and interpreting it in L-structures which are models of Id.
The first and easiest observation is made by the following proposition.
Proposition 1.10.4. For any L_I-structure S we obviously have S ⊨ Id.
Definition 1.10.5. Let S_1 = (S_1, C_1, F_1, P_1) and S_2 = (S_2, C_2, F_2, P_2) be L-structures. We call S_2 epimorphic to S_1 if there is a mapping
φ : S_1 → S_2
satisfying the following conditions:
1. φ is onto
2. φ(c^{S_1}) = c^{S_2} for all c ∈ C
3. φ(f^{S_1}(s_1, …, s_n)) = f^{S_2}(φ(s_1), …, φ(s_n)) for all f ∈ F and all s_1, …, s_n ∈ S_1
4. (s_1, …, s_n) ∈ P^{S_1} ⇔ (φ(s_1), …, φ(s_n)) ∈ P^{S_2} for all P ∈ P and all s_1, …, s_n ∈ S_1.
Mappings satisfying 1.–4. are called epimorphisms.
In mathematics we meet a lot of epimorphisms. E.g. if S_1, S_2 are groups, i.e. special L_GT-structures, then φ : S_1 → S_2 is an epimorphism in the sense of Definition 1.10.5 iff it is a group homomorphism and onto. Here we also want the reader to look at Definitions 2.2.1 and 2.2.6.
Proposition 1.10.6. Let φ : S_1 → S_2 be an epimorphism and η an S_1-assignment. Then there is an S_2-assignment η^φ such that
1. t^{S_2}[η^φ] = φ(t^{S_1}[η]) for all L-terms t
2. Val_{S_2}(F, η^φ) = Val_{S_1}(F, η) for all L-formulas F.
Proof. Put η^φ(x) = φ(η(x)) and check 1. and 2. by induction on the length of t and F, respectively, as an easy exercise.
Theorem 1.10.7. Let S be an L-structure satisfying Id. Then there is an L_I-structure S̄ epimorphic to S, i.e. a structure which interprets `=' standardly.
Proof. Let S = (S, C, F, P). We define S̄ = (S̄, C̄, F̄, P̄) as follows:
ā = {b ∈ S : a =^S b} for a ∈ S,
S̄ = {ā : a ∈ S},
c^{S̄} = (c^S)‾ for c ∈ C,
f^{S̄}(s̄_1, …, s̄_n) = (f^S(s_1, …, s_n))‾ for f ∈ F,
and P^{S̄} = {(ā_1, …, ā_n) : (a_1, …, a_n) ∈ P^S} for P ∈ P.
Now it is easy to show that
1. f^{S̄} and P^{S̄} are well defined,
2. φ : S → S̄, φ(a) = ā, is an epimorphism,
3. S̄ interprets = standardly.
The proofs are straightforward and left as an exercise.
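The collapsing construction of this proof can be carried out concretely for a finite example. The following Python sketch (illustrative; the mod-3 interpretation of `=' is a hypothetical choice of ours, and frozenset stands for the classes ā) builds S̄ and checks that `=' becomes the real identity on it.

```python
S = range(6)
eq = lambda a, b: a % 3 == b % 3   # =^S, a non-standard model of Id
f = lambda a: (a + 1) % 6          # f^S, compatible with eq

cls = {a: frozenset(b for b in S if eq(a, b)) for a in S}   # a |-> a-bar
S_bar = set(cls.values())          # the collapsed domain

# f-bar(a-bar) = (f(a))-bar; well defined because eq is compatible with f
f_bar = {cls[a]: cls[f(a)] for a in S}

assert len(S_bar) == 3
# "=" is now interpreted standardly: two classes are equal iff eq held
assert all((cls[a] == cls[b]) == eq(a, b) for a in S for b in S)
```

The map a ↦ ā is the epimorphism of the proof; that f_bar is well defined is exactly claim 1. there.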
In 1.10.4 we have seen that the standard interpretation of `=' always yields a model of Id. Combining this with Theorem 1.10.7 we see that there is no essential difference between the two standpoints mentioned at the beginning of this section. Therefore it should not be difficult to transfer the results on pure logic to logic with identity. First we get:
Theorem 1.10.8 (Compactness theorem for logic with identity). Let M be a set of L_I-formulas such that every finite subset of M is L_I-consistent. Then M is L_I-consistent.
Proof. Let M_0 ⊆ M be a finite subset of M. Then there is an L_I-structure S and an S-assignment η such that S ⊨ F[η] for any F ∈ M_0. Since S ⊨ Id we see that M ∪ Id is finitely consistent. By the compactness theorem for pure logic we therefore get an L-structure S' and an S'-assignment η' with S' ⊨ F[η'] for all F ∈ M ∪ Id, and by 1.10.7 and 1.10.6 S' and η' can be boiled down to an L_I-structure S̃ and an S̃-assignment η̃ satisfying M.
Let us denote by M ⊨_Id F that
S ⊨ M[η] ⇒ S ⊨ F[η]
holds for any L_I-structure S and S-assignment η. Then we have
Theorem 1.10.9. M ⊨_Id F iff M ∪ Id ⊨ F.
Proof. We have M ⊨_Id F iff M ∪ {¬F} is L_I-inconsistent, i.e. there is no L_I-structure S and no S-assignment η satisfying the formulas in M ∪ {¬F}. But then M ∪ Id ∪ {¬F} is L-inconsistent, because any L-structure S and any S-assignment η with S ⊨ M ∪ Id ∪ {¬F}[η] could be boiled down by 1.10.7 to an L_I-structure and a corresponding assignment without changing the truth values of the formulas.
On the other hand, if M ∪ {¬F} is L_I-consistent, then M ∪ Id ∪ {¬F} is L-consistent by 1.10.4.
Theorems 1.10.8 and 1.10.9 tell us that there is no essential difference between the two standpoints: regarding `=' as a logical or as an additional `non-logical' symbol. Therefore we are going to count `=' among the logical symbols. Since we have the compactness theorem for this language, all `model theoretic' properties are transferred. By Theorem 1.10.9, however, we see that there is also a calculus producing the valid formulas of first order logic with identity. We have ⊨_Id F iff Id ⊨ F, and all we have to do is to use a calculus for pure first order logic, e.g. the Hilbert-calculus introduced in section 1.7, and augment its axioms by the set Id. A bit more delicate to transfer are the results obtained by inspecting the Tait-calculus. A priori we cannot be sure that there is also a cut free calculus for first order logic with identity. We are going to check this in the next section.
Exercises
E 1.10.1.
a) Prove:
Id ⊨ s1 = t1 ∧ … ∧ sn = tn → t_{x1,…,xn}(s1, …, sn) = t_{x1,…,xn}(t1, …, tn).
b) Prove:
Id ⊨ s1 = t1 ∧ … ∧ sn = tn → (F_{x1,…,xn}(s1, …, sn) ↔ F_{x1,…,xn}(t1, …, tn)).
E 1.10.2. Prove Proposition 1.10.6.
E 1.10.3. Prove Theorem 1.10.7.
1.11 A Tait-Calculus for First Order Logic with Identity
Since we have ⊨Id F ⇒ Id ⊨ F by 1.10.9, it is obvious that for any formula F with ⊨Id F we get
⊢T ¬F1, …, ¬Fn, F
for finitely many instances F1, …, Fn of axioms in Id. Thus adding axioms ⊢T Δ, G for G ∈ Id and the cut rule would immediately give us ⊢T F. All the results we got
by the Tait-calculus, however, depended heavily on the fact that the calculus was cut-free. Therefore we are going to try to cook up a cut-free calculus for predicate logic with identity as well. In the Tait-language we also have the symbol ≠ for inequality.
Definition 1.11.1. We define the calculus ⊢Id as follows. We augment the axioms of the Tait-calculus by the additional axiom
⊢Id Δ, t = t for any L-term t (Id-axiom)
and the rules by the identity rule
⊢Id Δ, si = ti for i = 1, …, n and ⊢Id Δ, P_{x1,…,xn}(t1, …, tn) imply ⊢Id Δ, P_{x1,…,xn}(s1, …, sn),
where P denotes an atomic formula of the Tait-language for LI (Id-rule). The other axioms and rules are those of the ordinary Tait-calculus (cf. Definition 1.8.4).
Proposition 1.11.2. If ⊢Id Δ, then ¬Δ is LI-inconsistent.
Proof. The proof is essentially that of 1.8.6, with the additional clauses that we may have ⊢Id Δ by an Id-axiom or an Id-rule.
In the case of an Id-axiom we have (t = t) ∈ Δ. Hence (t ≠ t) ∈ ¬Δ and ¬Δ is LI-inconsistent. In the case of an Id-rule we have by the induction hypothesis the LI-inconsistency of
¬Δ ∪ {si ≠ ti} for i = 1, …, n
and
¬Δ ∪ {¬P_{x1,…,xn}(t1, …, tn)}.
Let S be an LI-structure and Φ an S-assignment. We have to show that
S ⊭ (¬Δ ∪ {¬P_{x1,…,xn}(s1, …, sn)})[Φ].
If S ⊨ ¬Δ[Φ], then S ⊨ (si = ti)[Φ] for i = 1, …, n and
S ⊨ P_{x1,…,xn}(t1, …, tn)[Φ].
This immediately entails
S ⊨ P_{x1,…,xn}(s1, …, sn)[Φ]
since S is an LI-structure.
Corollary 1.11.3. ⊢Id F entails ⊨Id F and Id ⊨ F.
Proof. By 1.11.2 we first get ⊨Id F, and this by 1.10.9 entails Id ⊨ F.
Our next aim is the proof of the opposite direction of 1.11.3. To prepare this we need
the following lemma.
Lemma 1.11.4. Assume ⊢Id Δ, P and ⊢Id Γ, ¬P for an atomic formula P. Then ⊢Id Δ, Γ.
Proof. Without loss of generality we may assume that P is not an equation. We show the lemma by induction on the length of the derivation of ⊢Id Δ, P. If P ∈ Δ ∪ Γ, we get ⊢Id Δ, Γ by the structural rule and are done. Thus assume P ∉ Δ ∪ Γ. Then ⊢Id Δ, P cannot be an L-axiom. If it is an Id-axiom, then (t = t) ∈ Δ for some L-term t, because P is not an equation. Then ⊢Id Δ, Γ is an Id-axiom, too. If the last inference is
⊢Id Δi, P ⇒ ⊢Id Δ, P for i = 0 or i = 0, 1,
then we get ⊢Id Δi, Γ by the induction hypothesis and obtain ⊢Id Δ, Γ by the same inference. Thus the only non-trivial case is that the main formula of the last inference is P. Since P is atomic, this can only be an inference according to the Id-rule. But then P = P_{x1,…,xn}(s1, …, sn) and we have the premises
(1.21) ⊢Id Δ, s1 = t1, …, ⊢Id Δ, sn = tn
and
(1.22) ⊢Id Δ, P_{x1,…,xn}(t1, …, tn).
We have the Id-axioms
(1.23) ⊢Id Δ, t1 = t1, …, ⊢Id Δ, tn = tn
and obtain from (1.21) and (1.23)
(1.24) ⊢Id Δ, t1 = s1, …, ⊢Id Δ, tn = sn
by applications of the Id-rule. From (1.24) and the hypothesis
⊢Id Γ, ¬P_{x1,…,xn}(s1, …, sn)
we get
(1.25) ⊢Id Δ, Γ, ¬P_{x1,…,xn}(t1, …, tn)
by another application of the Id-rule. But (1.25) together with (1.22) yields ⊢Id Δ, Γ by the induction hypothesis.
For the next lemma we introduce the set
Id_t = {∀x_{k+1} … ∀x_n F_{x1,…,xk}(t1, …, tk) : F ∈ Id0 and t1, …, tk are arbitrary L-terms}.
Lemma 1.11.5. Assume Γ ⊆ Id_t and ⊢T ¬Γ, Δ. Then ⊢Id Δ.
Proof. We induct on the length of the derivation of ⊢T ¬Γ, Δ. If ⊢T ¬Γ, Δ is an L-axiom, then either Δ already is an L-axiom, or Δ contains an equation t = t for some L-term t such that (t ≠ t) ∈ ¬Γ (formulas t ≠ t are the only atomic formulas occurring in ¬Γ). In the first case we get ⊢Id Δ as an L-axiom, in the second as an Id-axiom.
If the last inference is
⊢T ¬Γ, Δi ⇒ ⊢T ¬Γ, Δ for i = 0 or i ∈ {0, 1},
then we get ⊢Id Δi by induction hypothesis and deduce ⊢Id Δ by the same inference, which is possible because the Tait-calculus with identity comprises the pure Tait-calculus. Thus the crucial cases are those in which the main formula of the last inference belongs to ¬Γ, i.e. is of the form ¬F with F ∈ Γ. There we have the following sub-cases.
1. F = ∀xG
Then we have the premise ⊢T ¬Γ, Δ, ¬Gx(t). But if ∀xG ∈ Id_t we also have Gx(t) ∈ Id_t and obtain ⊢Id Δ by the induction hypothesis.
2. F = (s = t → t = s)
Then ¬F = (s = t ∧ t ≠ s) and we have the premises
⊢T ¬Γ, Δ, s = t and ⊢T ¬Γ, Δ, t ≠ s.
By the induction hypothesis we get
⊢Id Δ, s = t and ⊢Id Δ, t ≠ s.
From these we obtain
⊢Id Δ, s ≠ s
by an application of the Id-rule. Using the Id-axiom ⊢Id Δ, s = s we get
⊢Id Δ
by Lemma 1.11.4.
3. F = (s = t ∧ t = r → s = r)
Then ¬F = (s = t ∧ t = r) ∧ s ≠ r, and we have the premises
⊢T ¬Γ, Δ, s = t ∧ t = r and ⊢T ¬Γ, Δ, s ≠ r,
which by ∧-inversion (cf. the exercises) entail that there are derivations
⊢T ¬Γ, Δ, s = t and ⊢T ¬Γ, Δ, t = r
which are not longer than that of
⊢T ¬Γ, Δ, s = t ∧ t = r.
By induction hypothesis these yield
(1.26) ⊢Id Δ, s = t,
(1.27) ⊢Id Δ, t = r
and
(1.28) ⊢Id Δ, s ≠ r.
From (1.26) and (1.27) we get ⊢Id Δ, s = r by the Id-rule. This together with (1.28) entails ⊢Id Δ by Lemma 1.11.4.
4. F = (s1 = t1 ∧ … ∧ sn = tn → f s1…sn = f t1…tn)
Then we have the premises
(1.29) ⊢T ¬Γ, Δ, s1 = t1, …, ⊢T ¬Γ, Δ, sn = tn
and
(1.30) ⊢T ¬Γ, Δ, f s1…sn ≠ f t1…tn
(where we tacitly use ∧-inversion to get (1.29)). By the induction hypothesis these yield
(1.31) ⊢Id Δ, s1 = t1, …, ⊢Id Δ, sn = tn
and
(1.32) ⊢Id Δ, f s1…sn ≠ f t1…tn.
From (1.31) and the Id-axiom
(1.33) ⊢Id Δ, f t1…tn = f t1…tn
we get
(1.34) ⊢Id Δ, f s1…sn = f t1…tn
by the Id-rule. From (1.32) and (1.34) we get
(1.35) ⊢Id Δ
by Lemma 1.11.4.
5. F = (s1 = t1 ∧ … ∧ sn = tn → (P s1…sn → P t1…tn)),
i.e.
¬F = (s1 = t1 ∧ … ∧ sn = tn ∧ P s1…sn ∧ ¬P t1…tn).
Then we have the premises (again using ∧-inversion)
(1.36) ⊢T ¬Γ, Δ, s1 = t1, …, ⊢T ¬Γ, Δ, sn = tn,
(1.37) ⊢T ¬Γ, Δ, P s1…sn
and
(1.38) ⊢T ¬Γ, Δ, ¬P t1…tn.
We apply the induction hypothesis to (1.36), (1.37) and (1.38) and get by an application of the Id-rule
⊢Id Δ, P s1…sn and ⊢Id Δ, ¬P s1…sn,
which again by Lemma 1.11.4 entails ⊢Id Δ.
Theorem 1.11.6. We have ⊨Id F iff ⊢Id F.
Proof. The direction from right to left is Corollary 1.11.3. Thus assume ⊨Id F. Then by 1.10.9 we have Id ⊨ F, which by the compactness and deduction theorems entails ⊨ F1 ∧ … ∧ Fn → F for formulas F1, …, Fn ∈ Id ⊆ Id_t. By the completeness theorem for the Tait-calculus this yields ⊢T ¬F1, …, ¬Fn, F, and by 1.11.5 we finally get ⊢Id F.
Lemma 1.11.7 (Herbrand's lemma for logic with identity). If F is an ∃-formula with ⊨Id ∃xF, then there are finitely many L-terms t1, …, tn such that
⊢Id Fx(t1) ∨ … ∨ Fx(tn).
Proof. Assume ⊨Id ∃xF. Then
⊢T ¬F1, …, ¬Fn, ∃xF
for {F1, …, Fn} ⊆ Id by 1.10.9, the deduction theorem and the completeness theorem for the Tait-calculus. All formulas in Id are ∀-formulas. Hence ¬F1, …, ¬Fn are ∃-formulas and we may apply Lemma 1.9.2 to get finitely many terms t1, …, tn such that
⊢T ¬F1, …, ¬Fn, Fx(t1), …, Fx(tn).
This together with Lemma 1.11.5 entails ⊢Id Fx(t1) ∨ … ∨ Fx(tn).
Lemma 1.11.7 entails Herbrand's theorem for predicate logic with identity in the same way as Lemma 1.9.2 did in the case of pure logic. All the proofs, including the construction of the prenex form of a formula, can be transferred literally.
Theorem 1.11.8 (Herbrand's theorem for logic with identity). Let F be an L-sentence and FH = ∃x1 … ∃xk G. Then we have ⊨Id F iff there are finitely many k-tuples
(t11, …, t1k), …, (tn1, …, tnk)
of LH-terms such that
⊢Id G_{x1,…,xk}(t11, …, t1k) ∨ … ∨ G_{x1,…,xk}(tn1, …, tnk).
Another application of Lemma 1.11.5 is the interpolation theorem for predicate logic with identity. In a language with identity we can even avoid the addition of the empty formula, which was needed to state the interpolation theorem for pure logic in a general setting (cf. Theorem 1.9.13). We observe that the formula t = t for closed terms t behaves like ⊤ and dually t ≠ t like ⊥. We have
(1.39) ⊢Id Δ, t = t
for any formula set Δ by an Id-axiom, and
⊢Id Δ, t ≠ t ⇒ ⊢Id Δ
by (1.39) and Lemma 1.11.4. Thus we get the general interpolation theorem for the calculus ⊢Id. Of course, we can no longer count `=' among the non-logical symbols, because t ≠ t, in the role of the empty formula, must not contain a non-logical predicate symbol.
Theorem 1.11.9 (Interpolation theorem for logic with identity). If we have ⊢Id F → G, then there is an interpolation formula E for F and G, i.e. we have ⊢Id F → E and ⊢Id E → G, and E contains at most those non-logical predicate constants positively (negatively) which occur simultaneously positively (negatively) in F and G.
Proof. By the above remark and Theorem 1.11.6 it is just the same as the proof of 1.9.14, using a slight modification of the proof of Theorem 1.9.9 for the calculus ⊢Id in the axiom case and for the Id-rule.
We will now leave these more syntactical investigations and turn to the fundamentals
of model theory.
Chapter 2
Fundamentals of Model Theory
The objects of interest in model theory are structures and classes of structures. Here their connections to first order logic will be studied. In particular, we analyse whether and how structures and classes of structures can be described within first order logic.
In this chapter we will only deal with predicate logic with identity, and there will be no need to emphasise that. So we just write ⊨ instead of ⊨Id etc.
2.1 Conservative Extensions and Extensions by Definitions
Definition 2.1.1. Let L be a first order language. An L-theory is a set of L-sentences. L-theories are denoted by T, T′, … If F is an L-formula and T a theory such that T ⊨ F, then F is called provable in T. We call the sentences provable in T theorems of T. The language of a theory T is L(T) = L(C_T, F_T, P_T), where C_T, F_T and P_T are the sets of constant, function and predicate symbols which occur in sentences of T. A structure S which satisfies all sentences of T is called a T-model.
Here we give our standard example. Let LGT be the language of group theory (cf. section 1.1). Then AxGT, the set of the following formulas
∀x∀y∀z(x ∘ (y ∘ z) = (x ∘ y) ∘ z)
∀x(x ∘ 1 = x)
∀x∃y(x ∘ y = 1),
is an LGT-theory. An AxGT-model is just a group. Since in every group G we have G ⊨ ∀x(1 ∘ x = x), the sentence
∀x(1 ∘ x = x)
is a theorem of AxGT. As there are also non-commutative groups,
∀x∀y(x ∘ y = y ∘ x)
is not a theorem of AxGT.
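That ∀x(1 ∘ x = x) holds in every group can be verified by the usual equational computation (our own illustration; here y is a right inverse of x and z a right inverse of y ∘ x, both provided by the third axiom):

```latex
% Why AxGT |= \forall x (1 \circ x = x): with x \circ y = 1 and
% (y \circ x) \circ z = 1, using only associativity and the right identity.
\begin{align*}
(y\circ x)\circ(y\circ x) &= ((y\circ x)\circ y)\circ x
  = (y\circ(x\circ y))\circ x = (y\circ 1)\circ x = y\circ x,\\
y\circ x &= (y\circ x)\circ 1 = (y\circ x)\circ((y\circ x)\circ z)
  = ((y\circ x)\circ(y\circ x))\circ z = (y\circ x)\circ z = 1,\\
1\circ x &= (x\circ y)\circ x = x\circ(y\circ x) = x\circ 1 = x.
\end{align*}
```

The point of the example is that this fact, although not among the axioms, is a theorem of AxGT in the sense of Definition 2.1.1.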
Definition 2.1.2. Assume that L ⊆ L′ are first order languages.
a) An L′-theory T′ is an extension of an L-theory T if every T-theorem is also a T′-theorem, i.e.
T ⊨ F ⇒ T′ ⊨ F.
b) An extension T′ of T is conservative if for all L-sentences F we also have
T′ ⊨ F ⇒ T ⊨ F.
Now think of the language L′GT ⊇ LGT of section 1.5, which augments LGT by a unary function symbol ⁻¹ for the inverse function. That is, we set
Ax′GT = AxGT ∪ {∀x(x ∘ x⁻¹ = 1)}.
Our intuitive understanding of groups suggests that Ax′GT is a conservative extension of AxGT, i.e. Ax′GT does not prove more LGT-sentences than AxGT does. That this impression is correct follows from the next result.
Theorem 2.1.3. Assume that T and T′ are theories such that every T-model expands to a T′-model. Then T′ is conservative over T.
Proof. Let L = L(T) and L′ = L(T′). Let F be an L-sentence such that T′ ⊨ F. If S is a T-model, expand it to a T′-model S′. Then S′ ⊨ F. But S is the L-retract of S′ and F is in L. Thus S ⊨ F. Hence T ⊨ F.
We have already seen an example of a conservative extension, namely LI over L: since any L-structure expands to an LI-structure satisfying Id, we re-obtain the result that predicate logic with identity is a conservative extension of pure logic.
The above example of the inverse function in group theory gives reason for the following question. In mathematics it is usual to define new functions and relations and to prove theorems using those definitions, and usually we assume that it makes no difference whether we use the new definitions or not. The following definition makes precise what we usually do in mathematics, and the next theorem justifies mathematical practice.
Definition 2.1.4. Let T be a theory with language L = L(T). A language L′ = L(C′, F′, P′) with L ⊆ L′ is called an extension of T by definitions if the following conditions are satisfied:
1. For every c ∈ C′ ∖ C there is an L(T)-formula F^c such that
FV(F^c) = {x} and T ⊨ ∃x(F^c ∧ ∀y(F^c_x(y) → y = x)),
briefly denoted by T ⊨ ∃!x F^c(x).
2. For every f ∈ F′ ∖ F with #f = n there is an L(T)-formula F^f such that
FV(F^f) = {x1, …, xn, y} and T ⊨ ∀x1 … ∀xn ∃!y F^f.
3. For every P ∈ P′ ∖ P with #P = n there is an L(T)-formula F^P such that
FV(F^P) = {x1, …, xn}.
If L′ is an extension by definitions of a theory T, then T(L′) is the theory which contains the following sentences:
1. all sentences in T, i.e. T ⊆ T(L′)
2. all sentences F^c_x(c) for c ∈ C′ ∖ C
3. all sentences ∀x1 … ∀xn F^f_y(f x1 … xn) for f ∈ F′ ∖ F
4. all sentences ∀x1 … ∀xn (P x1 … xn ↔ F^P) for P ∈ P′ ∖ P.
We have seen L′GT to be an extension of AxGT by definitions.
Theorem 2.1.5. If L′ is an extension of T by definitions, then T(L′) is conservative over T.
Proof. Let S = (S, C, F, P) be a T-model. We have to expand S to a T(L′)-model S′. To obtain an L′-structure we first have to interpret the symbols in L′ which do not belong to L.
1. If c ∈ C′ ∖ C, then there is a formula F^c such that T ⊨ ∃!x F^c. Because of S ⊨ T we also have S ⊨ ∃!x F^c, which entails that there is a uniquely determined element s ∈ S such that S ⊨ F^c_x[s]. We put c^{S′} = s. Then we obviously get
(2.1) S′ ⊨ F^c_x(c).
2. Let f ∈ F′ ∖ F and #f = n. Then there is an L(T)-formula F^f such that
(2.2) S ⊨ ∀x1 … ∀xn ∃!y F^f.
We define
f^{S′}(s1, …, sn) = t iff S ⊨ F^f[s1, …, sn, t].
This defines a function, because for arbitrary s1, …, sn ∈ S there is exactly one t such that S ⊨ F^f[s1, …, sn, t] by (2.2). Then we obviously have
(2.3) S′ ⊨ ∀x1 … ∀xn F^f_y(f x1 … xn).
3. For P ∈ P′ ∖ P with #P = n we have an L(T)-formula F^P such that FV(F^P) = {x1, …, xn}. Define
P^{S′} = {(s1, …, sn) ∈ S^n : S ⊨ F^P[s1, …, sn]}.
Then we get
(2.4) S′ ⊨ ∀x1 … ∀xn (P x1 … xn ↔ F^P).
From (2.1), (2.3) and (2.4) we see that S′ is a T(L′)-model. Thus by 2.1.3 T(L′) is a conservative extension of T.
Now we can strengthen Theorem 2.1.5: for every formula of an extension by definitions L′ of a theory T there is an equivalent L(T)-formula, i.e. it is possible to replace the newly defined symbols by their definitions.
Theorem 2.1.6. Let L′ be an extension of T by definitions. Then for each L′-formula F there is an L(T)-formula F^T such that
T(L′) ⊨ F ↔ F^T.
Proof. In a first step we prove: for any L′-term t there is an L(T)-formula G^t such that
(2.5) T(L′) ⊨ t = x ↔ G^t for x ∉ FV(t),
by induction on the length of t. If t is an L-term we put G^t = (t = x). If t = c ∈ C′ ∖ C, then there is an L-formula F^c such that T ⊨ ∃!x F^c and T(L′) ⊨ F^c_x(c). We put G^c = F^c. Then, if c = x, we get G^c from F^c_x(c). On the other hand, if G^c holds, we get c = x from T ⊨ ∃!x G^c and T(L′) ⊨ G^c_x(c). Hence
T(L′) ⊨ c = x ↔ G^c.
If t = f s1 … sn, then there are formulas G^{s_i} such that
(2.6) T(L′) ⊨ s_i = x ↔ G^{s_i}
for i ∈ {1, …, n}. If f ∉ F, then there is an L(T)-formula F^f with FV(F^f) = {x1, …, xn, y} such that
T(L′) ⊨ ∀x1 … ∀xn ∃!y F^f
and
(2.7) T(L′) ⊨ ∀x1 … ∀xn F^f_y(f x1 … xn).
We define
G^t = ∃x1 … ∃xn (G^{s_1}_x(x1) ∧ … ∧ G^{s_n}_x(xn) ∧ F^f_y(x)).
Then we have
(2.8) T(L′) ⊨ f s1 … sn = x ↔ G^t.
To show (2.8) we observe that by (2.6) we have
(2.9) T(L′) ⊨ ∃x1 … ∃xn (G^{s_1}_x(x1) ∧ … ∧ G^{s_n}_x(xn)).
On the other hand we also have
(2.10) T(L′) ⊨ f s1 … sn = x ∧ x1 = s1 ∧ … ∧ xn = sn → f x1 … xn = x.
From (2.6), (2.7) and (2.10) we therefore get
T(L′) ⊨ f s1 … sn = x ∧ ∃x1 … ∃xn (G^{s_1}_x(x1) ∧ … ∧ G^{s_n}_x(xn)) → ∃x1 … ∃xn (G^{s_1}_x(x1) ∧ … ∧ G^{s_n}_x(xn) ∧ F^f_y(x)),
which together with (2.9) yields
T(L′) ⊨ f s1 … sn = x → G^t.
For the opposite direction we observe that by (2.7) and T(L′) ⊨ ∀x1 … ∀xn ∃!y F^f we have
T(L′) ⊨ F^f_y(x) → f x1 … xn = x.
Thus
T(L′) ⊨ ∃x1 … ∃xn (s1 = x1 ∧ … ∧ sn = xn ∧ F^f_y(x)) → f s1 … sn = x,
which together with (2.6) entails
T(L′) ⊨ G^t → f s1 … sn = x.
If f ∈ F we put
G^t = ∃x1 … ∃xn (G^{s_1}_x(x1) ∧ … ∧ G^{s_n}_x(xn) ∧ f x1 … xn = x)
and show T(L′) ⊨ f s1 … sn = x ↔ G^t as above. This terminates the proof of (2.5).
Next we show: for an atomic formula P t1 … tn there is an L(T)-formula G such that
(2.11) T(L′) ⊨ P t1 … tn ↔ G.
If P ∉ P, then there is an L(T)-formula F^P with FV(F^P) = {x1, …, xn} such that
(2.12) T(L′) ⊨ ∀x1 … ∀xn (P x1 … xn ↔ F^P).
We put
G = ∃x1 … ∃xn (G^{t_1}_x(x1) ∧ … ∧ G^{t_n}_x(xn) ∧ F^P),
and if P ∈ P we may just put F^P = P x1 … xn, i.e.
G = ∃x1 … ∃xn (G^{t_1}_x(x1) ∧ … ∧ G^{t_n}_x(xn) ∧ P x1 … xn),
where the G^{t_i} are the formulas given by (2.5). Then we get
T(L′) ⊨ P t1 … tn ↔ G,
since we have
T(L′) ⊨ ∃x1 … ∃xn (G^{t_1}_x(x1) ∧ … ∧ G^{t_n}_x(xn))
and
T(L′) ⊨ P t1 … tn ∧ ∃x1 … ∃xn (G^{t_1}_x(x1) ∧ … ∧ G^{t_n}_x(xn)) → G
by (2.5) and (2.12), and hence
T(L′) ⊨ P t1 … tn → G,
and the opposite direction follows since also
T(L′) ⊨ ∃x1 … ∃xn (G^{t_1}_x(x1) ∧ … ∧ G^{t_n}_x(xn) ∧ F^P) → P t1 … tn
by (2.12) and (2.5). From (2.11), however, we get: for any L′-formula F there is an L(T)-formula F^T such that
(2.13) T(L′) ⊨ F ↔ F^T,
defining F^T inductively by the clauses
1. (P t1 … tn)^T = G, where G is as in (2.11)
2. (F ∧ G)^T = F^T ∧ G^T
3. (¬F)^T = ¬(F^T)
4. (∃xF)^T = ∃x(F^T).
Now an easy induction on the length of F shows (2.13).
Corollary 2.1.7. Let L′ be an extension of T by definitions. For every L′-formula F there is an L(T)-formula F^T such that T(L′) ⊨ F iff T ⊨ F^T.
Proof. Take F^T as in 2.1.6. Then we have
(2.14) T(L′) ⊨ F ↔ F^T.
Thus if T(L′) ⊨ F, then T(L′) ⊨ F^T, which entails T ⊨ F^T by 2.1.5 since F^T is an L(T)-formula. On the other hand, if T ⊨ F^T, then of course T(L′) ⊨ F^T, which entails T(L′) ⊨ F by (2.14).
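To make the inductive definition of the translation F ↦ F^T concrete, here is a toy sketch in Python (our own illustration, not from the text). It covers only the easiest half of the construction: eliminating a defined predicate symbol whose arguments are variables, so the term translation via the formulas G^t is not needed. All identifiers (`translate`, `rename`, the predicate names) are ours, and we assume substituted variables are never captured (bound variables are fresh).

```python
# Toy version of F |-> F^T (Theorem 2.1.6), restricted to defined predicate
# symbols applied to variables. Formulas are nested tuples:
#   ('atom', P, args), ('not', F), ('and', F, G), ('ex', x, F).

def translate(formula, defined):
    """Replace every atom P(v1,...,vn) whose symbol P has a definition
    (params, body) in `defined` by body with params renamed to v1,...,vn."""
    tag = formula[0]
    if tag == 'atom':
        _, pred, args = formula
        if pred in defined:
            params, body = defined[pred]
            return rename(body, dict(zip(params, args)))
        return formula
    if tag == 'not':
        return ('not', translate(formula[1], defined))
    if tag == 'and':
        return ('and', translate(formula[1], defined),
                       translate(formula[2], defined))
    if tag == 'ex':
        return ('ex', formula[1], translate(formula[2], defined))
    raise ValueError(tag)

def rename(formula, sub):
    """Apply the variable substitution `sub`, stopping it at binders."""
    tag = formula[0]
    if tag == 'atom':
        return ('atom', formula[1], tuple(sub.get(v, v) for v in formula[2]))
    if tag == 'not':
        return ('not', rename(formula[1], sub))
    if tag == 'and':
        return ('and', rename(formula[1], sub), rename(formula[2], sub))
    if tag == 'ex':
        x, body = formula[1], formula[2]
        inner = {v: t for v, t in sub.items() if v != x}
        return ('ex', x, rename(body, inner))
    raise ValueError(tag)
```

For instance, with the definition P(x) ↔ ∃y R(x, y), the atom P(u) translates to ∃y R(u, y), exactly as clause 1 of (2.13) prescribes.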
Exercises
E 2.1.1. Let L1 be an extension by definitions of T and L2 an extension by definitions of T(L1). Prove that L2 is an extension by definitions of T.
E 2.1.2. Let T be a consistent L-theory and Φ a set of sentences such that
F1, …, Fn ∈ Φ ⇒ F1 ∨ … ∨ Fn ∈ Φ.
Show the equivalence of:
1. T has an axiom system Γ ⊆ Φ, i.e. T ⊨ Γ and Γ ⊨ T.
2. For all L-structures S, S′:
S ⊨ T and ∀F ∈ Φ (S ⊨ F ⇒ S′ ⊨ F) implies S′ ⊨ T.
Hint: For the interesting direction let Γ = {F ∈ Φ : T ⊨ F}. To prove Γ ⊨ T, define for an arbitrary S′ ⊨ Γ the set
Δ = {¬F : S′ ⊨ ¬F and F ∈ Φ}.
Show the consistency of T ∪ Δ and derive the premise of 2.
2.2 Completeness and Categoricity
It is a natural question to ask whether there is a set of sentences, i.e. an axiom system, which characterises the theorems of a given L-structure S. The obvious answer is of course `yes'. Just take
Th(S) = {F : F is an L-sentence and S ⊨ F}.
But of course this is not what we really meant. Our question was whether there is some `simple' set of sentences, whatever `simple' may mean. A possible `simple' set would be a finite set of sentences, or at least a set of sentences together with an algorithm which allows us to decide whether a sentence belongs to the set or not. Before we make more precise what a `simple' set of sentences really could mean, we shall study some general properties of axiom systems. But first we take a look at the connections between two given structures.
Definition 2.2.1. Let S1, S2 be two L-structures.
a) We call φ : S1 → S2 an embedding, written as
φ : S1 ↪ S2,
if the following conditions are satisfied:
1. φ is one-one.
2. φ(c^{S1}) = c^{S2} for all c ∈ C.
3. φ(f^{S1}(s1, …, sn)) = f^{S2}(φ(s1), …, φ(sn)) for all f ∈ F and all s1, …, sn ∈ S1.
4. (s1, …, sn) ∈ P^{S1} ⇔ (φ(s1), …, φ(sn)) ∈ P^{S2} for all P ∈ P and all s1, …, sn ∈ S1.
b) We call φ : S1 → S2 an elementary embedding, written as
φ : S1 ≼ S2,
if we have
1. φ : S1 ↪ S2
2. for any L-formula F and any S1-assignment Φ we have
S1 ⊨ F[Φ] ⇔ S2 ⊨ F[φΦ],
where φΦ is the S2-assignment φ ∘ Φ.
c) We call S1 a substructure of S2, written as S1 ⊆ S2, if the identity is an embedding, i.e. id_{S1} : S1 ↪ S2.
d) S1 is an elementary substructure of S2, or synonymously S2 is an elementary extension of S1, written as S1 ≼ S2, if the identity is an elementary embedding, i.e. id_{S1} : S1 ≼ S2.
This definition describes some possible relationships between two given structures. That these relations can be expressed in terms of the validity of certain sets of formulas will be established in the following lemma. Recall (cf. section 1.3) that for an L-structure S we introduced the language LS which contains a constant symbol s̄ for every element s ∈ S, the domain of S. By SS we denoted the LS-expansion of S interpreting each constant s̄ by s. We showed in 1.3.6 and 1.3.7: t^{SS}[s] = (t_x(s̄))^{SS} for closed LS-terms t_x(s̄), and
SS ⊨ F[s] iff SS ⊨ F_x(s̄),
SS ⊨ ∀xF iff SS ⊨ F_x(s̄) for all s ∈ S,
SS ⊨ ∃xF iff SS ⊨ F_x(s̄) for some s ∈ S
for LS-sentences F_x(s̄).
Definition 2.2.2.
a) The diagram of an L-structure S is the set
Diag(S) = {F : F is an atomic or negated atomic LS-sentence and SS ⊨ F}.
b) The elementary diagram of an L-structure S is the set
Th(SS) = {F : F is an LS-sentence and SS ⊨ F}.
Proposition 2.2.3. Let S1, S2 be two L-structures and φ : S1 → S2. Then we define the LS1-expansion S′ of S2 by
s̄^{S′} = φ(s).
a) If we have φ : S1 ↪ S2, then for all LS1-terms t and all S1S1-assignments Φ we have φ(t^{S1S1}[Φ]) = t^{S′}[φΦ].
b) If φ : S1 ↪ S2, then we have S′ ⊨ Diag(S1).
c) If φ : S1 ≼ S2, then we have S′ ⊨ Th(S1S1).
Proof. This is left as an exercise to the reader.
Proposition 2.2.4. Let S1, S2 be two L-structures and S′ an LS1-expansion of S2. Then define φ : S1 → S2 by
φ(s) = s̄^{S′}.
a) If S′ ⊨ Diag(S1), then we have φ : S1 ↪ S2.
b) If S′ ⊨ Th(S1S1), then we have φ : S1 ≼ S2.
Proof. We prove only the first part. First we have to show that φ is one-one. If we have r, s ∈ S1 with r ≠ s, then
S1S1 ⊨ r̄ ≠ s̄.
Since S′ ⊨ Diag(S1) we know
φ(r) = r̄^{S′} ≠ s̄^{S′} = φ(s),
and so φ is one-one. Now let c ∈ C and put a = c^{S1}. Then we have
S1S1 ⊨ c = ā,
where ā is the name of the interpretation of c in S1. Since S′ ⊨ Diag(S1) we obtain
φ(c^{S1}) = φ(a) = ā^{S′} = c^{S′} = c^{S2}.
If we take f ∈ F and s1, …, sn ∈ S1, we have, with b = f^{S1}(s1, …, sn),
S1S1 ⊨ f s̄1 … s̄n = b̄.
Because of S′ ⊨ Diag(S1) we have
φ(f^{S1}(s1, …, sn)) = φ(b) = b̄^{S′} = (f s̄1 … s̄n)^{S′} = f^{S2}(φ(s1), …, φ(sn)).
If we take P ∈ P, then, since S′ ⊨ Diag(S1),
S1S1 ⊨ P s̄1 … s̄n iff S′ ⊨ P s̄1 … s̄n.
But this yields
(s1, …, sn) ∈ P^{S1} iff (φ(s1), …, φ(sn)) ∈ P^{S′} = P^{S2}.
Corollary 2.2.5. Let S1, S2 be two L-structures whose domains satisfy S1 ⊆ S2.
a) S1 is a substructure of S2 iff S2S1 ⊨ Diag(S1).
b) S1 ≼ S2 iff S2S1 ⊨ Th(S1S1).
Definition 2.2.6. Let S1, S2 be two L-structures.
a) We call S1 and S2 elementarily equivalent, denoted by S1 ≡ S2, if
Th(S1) = Th(S2).
b) We call S1 and S2 isomorphic, written as S1 ≅ S2, if there is a one-one function φ from S1 onto S2 such that
φ : S1 ↪ S2 and φ⁻¹ : S2 ↪ S1.
Definition 2.2.7.
a) An axiom system for an L-structure S is a set Ax ⊆ Th(S).
b) An axiom system Ax is complete for S if Con(Ax) = Th(S), where
Con(Ax) = {F : F is an L-sentence and Ax ⊨ F}
is the set of logical consequences of Ax.
c) A theory T is complete if T is a complete axiom system for some L-structure S.
d) The model class ModL(T) of an L-theory T is defined as
ModL(T) = {S : S ⊨ T}.
e) An axiom system Ax for S is categorical for S if every structure in ModL(Ax) is isomorphic to S.
f) A theory T is (κ-)categorical (for some cardinal κ) if any two models of T (of cardinality κ) are isomorphic.
Proposition 2.2.8. Let Ax be categorical for S. Then Ax is also complete for S.
Proof. Con(Ax) ⊆ Th(S) follows from Ax ⊆ Th(S). If F ∈ Th(S) and S′ ⊨ Ax, then S′ is isomorphic to S. Hence F ∈ Th(S) = Th(S′), i.e. S′ ⊨ F, and we have Ax ⊨ F.
But there are only uninteresting categorical axiom systems (cf. Exercise E 2.2.10), so we will not mention them further.
We are able to give some characterisations of complete theories:
Lemma 2.2.9. Let T be an L-theory. Then the following statements are equivalent:
1: T is complete.
2: Con(T ) is maximal and consistent.
3: T is consistent and Con(T) = Th(S ) for all S 2 ModL (T):
4: T is consistent and for any two S ; S 0 2 ModL (T) we have S  S 0 :
Proof.
1: ) 2: is obvious since Con(T) = Th(S ) for some L-structure S and Th(S ) is consis-
tent and maximal.
2: ) 3: Let S 2 ModL (T): Then T  Th(S ): If F 2 Th(S ), then F 2 Con(T) because
F 2= Con(T) entails :F 2 Con(T)  Th(S ), a contradiction.
3: ) 4: is obvious since Th(S ) = Th(S 0 ) = Con(T ) for S ; S 0 2 ModL (T):
3: ) 1: holds trivially.
4. ⇒ 3. ModL(T) ≠ ∅ since T is consistent. Pick S ∈ ModL(T). Then T ⊆ Th(S). If F ∈ Th(S) but F ∉ Con(T), then T ∪ {¬F} is consistent and thus has a model S′, which also belongs to ModL(T). Thus S′ ≡ S, which contradicts the fact that ¬F ∈ Th(S′) and F ∈ Th(S).
For an L-theory T we define the cardinality of T by
card(T) = card(L(T)),
i.e. by the cardinality of the set of non-logical symbols occurring in T.
Lemma 2.2.10. Let T be an L-theory.
a) If T possesses arbitrarily large finite models, then T also has an infinite model.
b) If T has an infinite model, then T has a model of cardinality κ for any cardinal κ ≥ max(card(T), ℵ0).
Proof.
a) Put
T′ = T ∪ {∀x0 … ∀xn ∃y(x0 ≠ y ∧ … ∧ xn ≠ y) : n ∈ IN}.
Then T′ is finitely consistent since T possesses arbitrarily large finite models. By compactness T′ is consistent. A model of T′ of course cannot be finite.
b) Let κ be a cardinal ≥ max(card(T), ℵ0) and choose a set {c_α : α < κ} of new constant symbols. Put
T′ = T ∪ {c_α ≠ c_β : α, β < κ, α ≠ β}.
Any finite subset T0 of T′ contains only finitely many of the new constant symbols c_α. Since T possesses an infinite model S0, we may expand it to a T0-model S1 by interpreting the constants c_{α1}, …, c_{αn} occurring in T0 in such a way that c_{αi}^{S1} ≠ c_{αj}^{S1} holds for i ≠ j. Hence T′ is finitely consistent and, by compactness, consistent. The term model constructed in the proof of the completeness theorem yields a model S of T′ with
card(S) = max(ℵ0, card(T′)) = max(ℵ0, card(T), κ) = κ.
Now we have to boil down the model S to a structure interpreting equality standardly. Using Theorem 1.10.7 we obtain an epimorphic model S̃ of T′. Here we have
card(S̃) ≤ κ
since we have taken equivalence classes. But since
T′ ⊨ c_α ≠ c_β
for α ≠ β, the new constant symbols lie in different equivalence classes. So we conclude
card(S̃) = κ.
Since there is an infinite group and card(AxGT) = ℵ0, we have by Lemma 2.2.10 groups of any infinite cardinality.
The following theorem goes back to Leopold Löwenheim [∗1878, †1957] in 1915 and Thoralf Skolem [∗1887, †1963] in 1920.
Theorem 2.2.11 (Lowenheim-Skolem downwards). Let S be an L-structure and
card(S) =   @0 : For any in nite  with card(L)     and any S0  S with
card(S0 )   there is an elementary substructure S 0  S such that S0  S 0 and
card(S 0 ) = :
Proof. Let L = L(C ; F ; P ): Take S0 with S0  S0  S with card(S0 ) =  and put
S1 = S 0 [ fcS : c 2 Cg:
Then card(S1 ) =  since  is in nite and card(S 0 ) =  and
cardfcS : c 2 Cg  card(L)  :
The idea of the proof is to take the closure of S1 under all functions f S for f 2 F and
to add all witnesses for existential sentences valid in S : Formally this is done by the
following de nitions: For M  S we put
M  = M [ ff S (s1 ; : : : ; s#f ) : f 2 F and (s1 ; : : : ; s#f ) 2 M #f g:
For a formula F in L with FV(F) = fx0; : : :xng and (s1 ; : : : ; sn) 2 M n we de ne
S~s;F = fs 2 S : S j= F[s; s1; : : : ; sn]g:
If S~s;F is not empty, let S(~s ; F) be a xed element of S~s;F : Put
M = M [ fS(~s; F) : ~s 2 M n; F an L-formula and S~s;F 6= ;g:
From card(L)   we obtain that card(M)   entails card(M  )   as well as
card(M)  : We de ne S1 as above and
Sn+1 = Sn
Then we have card(Sn )   for all n 2 IN and for
[
S0 = Sn
n2IN
we have card(S 0 )  : Because of S01  S 0 and card(S1 ) =  it is in fact card(S 0 ) = .
Let S 0 = (S 0 ; C S 0 ; FS 0 ; PS 0) with cS = cS for c 2 C
f S 0 (s1 ; : : : ; sn) = f S (s1 ; : : : ; sn )
for f 2 F and
P S 0 = P S \ S 0 #P for P 2 P :
It is
(2.15) S′ ⊆ S,
(2.16) f^S : (S′)^{#f} → S′ for all f ∈ F,
(2.17) SS′ ⊨ Th(S′S′).
We have S_n ⊆ S by construction. Hence S′ ⊆ S. To show (2.16) assume #f = k and pick s1, …, sk ∈ S′. Then there is an n ∈ IN such that s1, …, sk ∈ S_n. Thus
f^{S′}(s1, …, sk) = f^S(s1, …, sk) ∈ (S_n)* ⊆ S_{n+1} ⊆ S′.
It remains to show (2.17). By the definition of f^{S′} and P^{S′} together with (2.15) and (2.16) we already have that S′ is a substructure of S. Hence
(2.18) SS′ ⊨ Diag(S′)
by Corollary 2.2.5. We show
(2.19) F ∈ Th(S′S′) ⇒ SS′ ⊨ F
by induction on the length of F. Without loss of generality we may assume that F is translated into the Tait-language of L. Then (2.18) covers the case that F is atomic, and we need not consider negations. If F = (F0 ∧ F1) we have F0, F1 ∈ Th(S′S′) and get SS′ ⊨ F0 ∧ F1 by the induction hypothesis. Similarly, if F = (F0 ∨ F1) we have F0 ∈ Th(S′S′) or F1 ∈ Th(S′S′), which entails SS′ ⊨ F0 ∨ F1 by the induction hypothesis. Let F = ∃xG. Then there is an s ∈ S′ such that S′S′ ⊨ Gx(s̄). By induction hypothesis this gives SS′ ⊨ Gx(s̄), which by S′ ⊆ S entails SS′ ⊨ ∃xG.
Finally let F = ∀xG. Assume F ∈ Th(S′S′) but SS′ ⊭ F. Then
(2.20) SS′ ⊨ ∃x¬G.
Let (s̄1, …, s̄k) be the list of all constant symbols from LS′ ∖ L occurring in G. Then G = G0_{x1,…,xk}(s̄1, …, s̄k), and (2.20) entails S_{(s1,…,sk),¬G0} ≠ ∅, and thus also
(2.21) S ⊨ ¬G0[s1, …, sk, S(s1, …, sk, ¬G0)].
But s1, …, sk ∈ S_n for some n ∈ IN, which entails
S(s1, …, sk, ¬G0) ∈ (S_n)‾ ⊆ S_{n+1} ⊆ S′.
Let s = S(s1, …, sk, ¬G0). From F ∈ Th(S′S′) we get
S′S′ ⊨ G0_{x1,…,xk,y}(s̄1, …, s̄k, s̄),
which by induction hypothesis entails SS′ ⊨ G0_{x1,…,xk,y}(s̄1, …, s̄k, s̄). This, however, contradicts (2.21) and 1.3.6. This terminates the proof of (2.19), and (2.19) entails S′ ≼ S.
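The iteration S1 ⊆ S2 ⊆ … used in this proof can be illustrated for a finite structure by the following Python sketch (our own illustration; it performs only the function-closure step M ↦ M*, omitting the witness-adding operation M ↦ M‾):

```python
from itertools import product

# Finite analogue of the closure construction in the proof of 2.2.11:
# iterate M |-> M u {f(s1,...,sk) : si in M} until a fixed point is
# reached, so the chain S1 c S2 c ... stabilises.

def closure(start, functions):
    """Smallest superset of `start` closed under every (arity, f) pair."""
    m = set(start)
    while True:
        new = {f(*args)
               for arity, f in functions
               for args in product(m, repeat=arity)}
        if new <= m:          # fixed point reached: m is closed
            return m
        m |= new
```

For instance, closing {1} under addition modulo 5 yields the whole domain {0, 1, 2, 3, 4}; in the infinite case of the theorem, the union of the chain plays the role of this fixed point, and the cardinality bound λ is preserved at every stage.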
Here we can give another proof of the second part of Lemma 2.2.10. Let T′ be defined
106 II. Model Theory
as in the proof of 2.2.10. Since we have seen that every nite subset of T 0 has a model
S by the compactness theorem for logic with identity. Because all the new constant
symbols have to be interpreted di erent we have card(S)  : Since   card(T) we
obtain by Lowenheim-Skolem downwards a model S 0 of T with card(S 0 ) = : The
Lowenheim-Skolem downwards theorem is another reason for some limits of rst order
logic. For example it is not possible to characterise the real numbers R (e.g. viewed
as a eld) up to isomorphism. To be more explicit there is no set M of rst order
sentences such that we have
S j= M i S = R
since we would nd some countable structure S  R and so S cannot be isomorphic
to R: In this argumentation we have assumed that R is an L-structure for a countable
language L: If we make no restrictions to the cardinality of L we are able to construct
a structure R  S with card(S) > card(R) by the next theorem. Though the following
theorem is named by Lowenheim and Skolem it is due to Alfred Tarski [ 1901,
y1983] and Robert L. Vaught [ 1926] in 1957.
Theorem 2.2.12 (Löwenheim-Skolem upwards). Let S be an L-structure and let κ = card(S) ≥ ℵ0. For any cardinal
λ ≥ max{κ, card(L)}
there is an elementary extension S' ≻ S such that card(S') = λ.
Proof. Let T = Th(S_S). Since κ ≥ ℵ0, S is an infinite model of T, which entails that T has a model S' of the given cardinality λ ≥ card(T) by Lemma 2.2.10. We have S' ⊨ Th(S_S) and thus S' ≻ S up to isomorphism. Because card(T) = max{κ, card(L)} we are done.
Theorem 2.2.13 (Vaught's test). Let κ ≥ ℵ0 and let T be a consistent κ-categorical theory without finite models and card(T) ≤ κ. Then T is complete.
Proof. Let S, S0 ∈ Mod_L(T). Then we obtain two models S', S0' of T of cardinality κ by Löwenheim-Skolem upwards. Thus S ≺ S' ≅ S0' ≻ S0, which entails S ≡ S0. By 2.2.9 this entails the completeness of T.
In the exercises we will learn that the theory DLO of the dense linear orderings without endpoints is ℵ0-categorical. Since DLO has no finite models, we know by Vaught's test that DLO is complete, and more: DLO is complete for (ℚ, <_ℚ), i.e. it axiomatises Th((ℚ, <_ℚ)).
Exercises
E 2.2.1. Prove Proposition 2.2.3.
E 2.2.2. Let L be a first order language whose only non-logical symbol is =. Determine easy criteria such that for all L-structures S, S':
a) S ≅ S'.
b) S ≡ S'.
E 2.2.3. Let S be an L-structure and X ⊆ S. Define
S0 = ∩{S' : S' ⊆ S and X ⊆ S'}.
a) Prove that there is a substructure S0 of S with domain S0.
b) Show that S0 = {t^S[s1, …, sn] : t an L-term and s1, …, sn ∈ X}.
E 2.2.4. If T is a set of ∀-formulas, S2 ⊨ T and S1 ⊆ S2, then S1 ⊨ T.
E 2.2.5. If T is a set of ∀∃-formulas (i.e. formulas of the shape
∀x1 … ∀xn ∃y1 … ∃ym F
with F quantifier free) and (Sn)_{n∈ℕ} is a sequence of L-structures with Sn ⊆ Sn+1 and Sn ⊨ T for all n ∈ ℕ, then S ⊨ T, where S = ∪_{n∈ℕ} Sn is defined in the obvious way.
E 2.2.6. If T is a set of positive formulas (i.e. formulas which are built up from atomic formulas by ∧, ∨, ∀, ∃), then T is preserved under epimorphisms, i.e. if F ∈ T,
S1 ⊨ F[s1, …, sn]
and φ is an epimorphism from S1 onto S2, then
S2 ⊨ F[φ(s1), …, φ(sn)].
E 2.2.7. Let S0 ⊆ S1 ⊆ S2 be L-structures. Prove or disprove:
a) S0 ≺ S1 and S1 ≺ S2 ⇒ S0 ≺ S2.
b) S0 ≺ S1 and S0 ≺ S2 ⇒ S1 ≺ S2.
c) S0 ≺ S2 and S1 ≺ S2 ⇒ S0 ≺ S1.
E 2.2.8 (Tarski's lemma). Let (Sn)_{n∈ℕ} be a sequence of L-structures with Sn ≺ Sn+1 for all n ∈ ℕ. Prove that for all n ∈ ℕ
Sn ≺ ∪_{i∈ℕ} S_i.
E 2.2.9. Let S, S' be L-structures with S ⊆ S'. Prove that S ≺ S' iff for all formulas F and all s1, …, sn ∈ S one has: if there is an s' ∈ S' with S' ⊨ F[s1, …, sn, s'], then there is an s ∈ S with S' ⊨ F[s1, …, sn, s].
E 2.2.10. Prove that a theory T is categorical iff it is complete and has a finite model.
E 2.2.11. Let L be a language with equality containing only a unary predicate symbol P. The theory T is to express that P is true for infinitely many objects and false for infinitely many objects.
a) Give an exact axiomatisation of T.
b) Prove: T is ℵ0-categorical (ℵ0 = card(ℕ)).
c) Prove: T is not κ-categorical for any uncountable κ.
E 2.2.12. Let S = (ℚ, <_ℚ) and S' = (ℝ, <_ℝ) be the L(<)-structures of the rational and real numbers, respectively. Prove the following claims:
a) If g : ℝ → ℝ is bijective and strictly monotone increasing, then for all r1, …, rn ∈ ℝ
S' ⊨ F[r1, …, rn] ⇔ S' ⊨ F[g(r1), …, g(rn)].
b) For q1, …, qn ∈ ℚ and r ∈ ℝ there is a bijective and strictly monotone increasing function g : ℝ → ℝ with
g(r) ∈ ℚ and g(q1) = q1, …, g(qn) = qn.
c) S ≺ S'.
d) S ≇ S'.
E 2.2.13. Let I ≠ ∅ be a set of partial isomorphisms from S to S' which has the `back-and-forth' property, i.e.
1. ∀f∈I ∀x∈S ∃g∈I (f ⊆ g ∧ x ∈ dom(g))
2. ∀f∈I ∀y∈S' ∃g∈I (f ⊆ g ∧ y ∈ rg(g)).
Here f ⊆ g denotes that g extends f in the following sense:
dom(f) ⊆ dom(g) ∧ ∀x∈dom(f) (f(x) = g(x)).
Prove: if S and S' are countable, then S ≅ S'.
E 2.2.14. Let DLO be the theory of the dense linear orderings without endpoints, i.e.
1. ∀x∀y(x < y → ∃z(x < z ∧ z < y))
2. ∀x∀y(x < y ∨ x = y ∨ y < x)
3. ∀x∀y∀z(x < y ∧ y < z → x < z)
4. ∀x(¬ x < x)
5. ∀x∃y(x < y)
6. ∀x∃y(y < x)
Prove:
a) (G. Cantor [*1845, †1918], 1895) DLO is ℵ0-categorical.
b) There are two models of DLO of the same cardinality which are not isomorphic.
Hint: Use E 2.2.13 for a), and for b) take ℝ and ℝ extended by a copy of ℚ on the left-hand side.
2.3 Elementary Classes and Omitting Types
Definition 2.3.1. Let K be a collection of L-structures.
a) We call K an elementary class if K = Mod_L(T) for some first order theory T.
b) We call K finitely axiomatisable if K = Mod_L(T) for some finite set T of L-sentences.
There is a wide variety of finitely axiomatisable classes of structures, e.g. groups, Abelian groups, domains, integral domains, fields, fields of fixed characteristic p ≠ 0, ordered fields, lattices, etc. In fact all these classes of structures are defined as the model classes of certain finite axiom systems. However, there are collections of structures which are not even elementary, let alone finitely axiomatisable. An easy example is given by the following lemma.
Lemma 2.3.2. Let K be the collection of all structures which are isomorphic to some fixed infinite structure S. Then K is not elementary.
Proof. Assume K = Mod_L(T) for some theory T. Since S is infinite, we obtain an elementary extension S' ≻ S such that card(S') > card(S) by the Löwenheim-Skolem theorem. S' and S cannot be isomorphic, hence S' ∉ K. On the other hand
S' ⊨ Th(S_S) and Th(S_S) ⊇ Th(S) ⊇ T,
which shows S' ∈ Mod_L(T), in contradiction to our assumption.
Indeed we have proven a stronger result than 2.3.2. What we really have shown is:
Theorem 2.3.3. Let K be a class of structures, all of the same cardinality κ ≥ ℵ0. Then K is not elementary.
2.3.2 and 2.3.3 already show us some limitations for the characterisation of structures by first order axiom systems. As a corollary of 2.2.10 we also get:
Theorem 2.3.4. Let K be a class of finite structures of arbitrarily large finite cardinality. Then K is not an elementary class.
Proof. K = Mod_L(T) for some theory T implies by 2.2.10 that there are infinite structures in K. This contradicts the hypothesis that K contains only finite structures.
Thus the classes of the finite groups or of the finite fields are not elementary.
Lemma 2.3.5. Let K = Mod_L(T) be finitely axiomatisable. Then there is a finite axiom system Ax for K which is contained in T, i.e. Ax ⊆ T.
Proof. Since K is finitely axiomatisable, there is a single sentence F such that K = Mod_L({F}) (e.g. let F be the conjunction of all the sentences of a finite axiom system for K). Then T ⊨ F, and by the compactness theorem there is a finite subset Ax ⊆ T such that Ax ⊨ F. If G ∈ Mod_L(Ax), then
G ∈ Mod_L({F}) = Mod_L(T); and Mod_L(T) ⊆ Mod_L(Ax)
holds trivially because of Ax ⊆ T. Hence
Mod_L(T) = Mod_L(Ax).
Now we will leave the topic of elementary classes and turn to the notion of a type. The idea of a type is to characterise some given objects (or, more precisely, a tuple of objects) by the set of all properties which are true of them. Such a set will be called a type.
Definition 2.3.6. Let S be an L-structure, s1, …, sn ∈ S and T an L-theory.
a) The type of the n-tuple (s1, …, sn) is the set of L-formulas
Σ = {F : FV(F) ⊆ {x1, …, xn} and S ⊨ F[s1, …, sn]}.
b) An n-type in S is the type of an n-tuple of elements of S.
c) An n-type in T is an n-type in some model of T.
Up to now we have only been concerned with the 0-type in S. This is
Th(S) = {F : F is an L-sentence and S ⊨ F}.
Next we observe some simple facts about types. The first is that we have either F ∈ Σ or ¬F ∈ Σ whenever Σ is an n-type in some L-structure and FV(F) ⊆ {x1, …, xn}.
Proposition 2.3.7. Let Σ0, Σ1 be n-types in S. Then Σ0 ⊆ Σ1 implies Σ0 = Σ1.
Proof. Assume Σ0 ≠ Σ1. Then there is an F with FV(F) ⊆ {x1, …, xn} such that F ∈ Σ1 and F ∉ Σ0. By the above observation ¬F ∈ Σ0 ⊆ Σ1. This is a contradiction, since F and ¬F cannot both belong to the type Σ1.
Proposition 2.3.8. Let Σ be an n-type in an L-theory T and let F1, …, Fk, G be L-formulas with free variables among x1, …, xn.
a) If T ⊨ F1 ∨ … ∨ Fk, then there is an Fi, 1 ≤ i ≤ k, such that Fi ∈ Σ.
b) If T ⊨ F1 ∧ … ∧ Fk → G and F1, …, Fk ∈ Σ, then G ∈ Σ.
The proofs are obtained by taking a look at the definition of an n-type in T.
Proposition 2.3.9. Let T be a countable theory, i.e. card(T) ≤ ℵ0, and M ≠ ∅ a set of formulas with free variables among x1, …, xn such that for all formulas F1, …, Fk ∈ M
T ⊭ ¬F1 ∨ … ∨ ¬Fk.
Then there is an n-type Σ in a countable model of T with M ⊆ Σ.
Proof. T ∪ M is consistent, since otherwise by the compactness theorem there are F1, …, Fk ∈ M such that T ∪ {F1, …, Fk} is inconsistent; but then
T ⊨ ¬F1 ∨ … ∨ ¬Fk,
which is excluded by hypothesis. By the Löwenheim-Skolem downward theorem there is a countable model S of T and s1, …, sn ∈ S such that
S ⊨ M[s1, …, sn].
But this means M ⊆ Σ, where Σ is the type of (s1, …, sn).
Corollary 2.3.10. Let T be a countable theory. If Σ is an n-type in T, then there is a countable model S of T such that Σ is an n-type in S.
Proof. By Proposition 2.3.9 (with M = Σ) and Proposition 2.3.7 it suffices to prove
T ⊭ ¬F1 ∨ … ∨ ¬Fk
for all F1, …, Fk ∈ Σ. But if there were F1, …, Fk ∈ Σ such that
T ⊨ ¬F1 ∨ … ∨ ¬Fk,
then by Proposition 2.3.8 there would be an Fi such that ¬Fi ∈ Σ. This is not possible.
Definition 2.3.11. Let T be an L-theory and M a set of L-formulas. The formula F is a T-generator of M if
1. T ∪ {F} is consistent, and
2. T ∪ {F} ⊨ G for all G ∈ M.
So if F is a T-generator of M, then it is possible to generate all formulas of M out of F using T. The next lemma shows that an n-type in T is determined by any of its generators, if there is one.
Lemma 2.3.12. Let Σ be an n-type in T and F a T-generator of Σ with free variables among x1, …, xn. Then
Σ = {G : FV(G) ⊆ {x1, …, xn} and T ⊨ F → G}.
Proof. We have `⊆' since F is a T-generator of Σ. For the other inclusion take a formula G with FV(G) ⊆ {x1, …, xn} and
(2.22) T ⊨ F → G.
Since F is a T-generator of Σ we have
F ∈ Σ,
because otherwise we had ¬F ∈ Σ and so
T ⊨ F → ¬F.
This would imply T ⊨ ¬F, i.e. T ∪ {F} would be inconsistent. Using Proposition 2.3.8 and (2.22) we obtain G ∈ Σ.
We close this section with a theorem of A. Ehrenfeucht.
Theorem 2.3.13 (Omitting types theorem). Let T be a countable and consistent theory, and let M be a set of formulas with free variables among x1, …, xn without a T-generator. Then there is a countable model S of T such that no n-type in S contains M.
Proof. Let T' = T ∪ H_L(T) be the Henkin extension of T. Since T is countable, so is T' by Lemma 1.5.11. So there is an enumeration of the n-tuples of Henkin constants. Now define inductively a sequence of L(T')-sentences such that
a) T_k = T' ∪ {F1, …, Fk} is consistent,
b) F_k is ¬F_{x1,…,xn}(c1, …, cn), where F ∈ M and (c1, …, cn) is the kth n-tuple in the above enumeration.
First observe that T' is consistent by Theorem 1.5.7. Now suppose that in step k the sentences F1, …, F_{k−1} have been defined. Let (c1, …, cn) be the kth n-tuple and let
Δ = {F : F is an L(T)-formula, FV(F) ⊆ {x1, …, xn} and T_{k−1} ⊨ F_{x1,…,xn}(c1, …, cn)}.
For any F ∈ Δ we have
T' ⊨ F1 ∧ … ∧ F_{k−1} → F_{x1,…,xn}(c1, …, cn).
So there is an L(T)-formula G such that
T ⊨ G_{x1,…,xm}(c1, …, cm) → F_{x1,…,xn}(c1, …, cn)
for some m ≥ n and
(2.23) T_{k−1} ⊨ G_{x1,…,xm}(c1, …, cm).
This implies
T ⊨ ∃x_{n+1} … ∃x_m G_{x1,…,xn}(c1, …, cn) → F_{x1,…,xn}(c1, …, cn)
and
T_{k−1} ⊨ ∃x_{n+1} … ∃x_m G_{x1,…,xn}(c1, …, cn).
Since T_{k−1} is consistent we have
T_{k−1} ⊭ ¬∃x_{n+1} … ∃x_m G_{x1,…,xn}(c1, …, cn),
and this yields by Exercise E 1.6.4
T ⊭ ¬∃x_{n+1} … ∃x_m G.
This means that T ∪ {∃x_{n+1} … ∃x_m G} is consistent, and since M has no T-generator there is an F ∈ M such that
T ⊭ ∃x_{n+1} … ∃x_m G → F.
By (2.23) and the definition of Δ we obtain
F ∉ Δ.
Define F_k = ¬F_{x1,…,xn}(c1, …, cn). Then T_k is consistent. This finishes the construction of the formulas (F_k)_{k∈ℕ}.
If we set
T̃ = T' ∪ {F_k : k ∈ ℕ},
then by the compactness theorem T̃ is consistent and countable. Now let S be the model of T̃ constructed in the proof of the compactness theorem. Then S is countable, and for every t ∈ S there is a Henkin constant c such that t = c^S, because t is a term and
S ⊨ ∃x(x = t).
So for (s1, …, sn) ∈ S^n we find an n-tuple (c1, …, cn) of Henkin constants with s_i = c_i^S for 1 ≤ i ≤ n. By the above construction we find an F ∈ M such that
T̃ ⊨ ¬F_{x1,…,xn}(c1, …, cn)
and, because S ⊨ T̃,
S ⊨ ¬F[s1, …, sn].
This means that F is not in the n-type of (s1, …, sn). So M is not contained in any n-type in S.
Exercises
E 2.3.1. Prove that K is an elementary class iff K is the intersection of a family of finitely axiomatisable classes.
E 2.3.2. Let (K_i)_{i∈I} be a collection of elementary classes and let K ⊇ ∩_{i∈I} K_i be a finitely axiomatisable class. Show that there is a finite subcollection (K_i)_{i∈I0}, I0 ⊆ I, such that K ⊇ ∩_{i∈I0} K_i.
E 2.3.3. Let (K_n)_{n∈ℕ} be a sequence of finitely axiomatisable classes such that for all n ∈ ℕ, K_{n+1} is a proper subclass of K_n. Prove that ∩_{n∈ℕ} K_n is not finitely axiomatisable.
Chapter 3
Fundamentals of the Theory of Decidability
In this chapter we are going to study the functions
f : ℕⁿ → ℕ
which are `effectively computable'. This means we will study those functions which are given by some algorithm. But what is an algorithm? We may have an intuitive conception of an algorithm, but we want a mathematically precise definition. Such a definition becomes necessary if we want to prove that a function is not effectively computable, i.e. that there is no algorithm computing the function.
Here we have chosen a mathematical way to introduce effectively computable functions. First we are going to single out a very natural class of effectively computable functions:
the primitive recursive functions
where the algorithmic feature is a simple kind of recursion. In the first two sections we will see that the primitive recursive functions have very nice properties concerning computability. But we will also see that there are computable functions which are not primitive recursive. This leads to the notion of a recursive function, which is the class of functions we study in the remaining sections. In section 3.7 we give another approach to computability which is closer to real computers. There we define the class of functions computable on a
random access machine
which differs from real computers only in that it is assumed to be able to store arbitrarily large natural numbers. In that section we will learn that the functions which are computable on such machines are exactly
the recursive functions
which will justify our approach. The last section of this chapter is devoted to a question going back to the ideas of G. W. Leibniz. He wanted to construct a machine deciding whether a given mathematical statement is true or false. We will see that he had to fail.
3.1 Primitive Recursive Functions
Definition 3.1.1. The basic functions are the following.
1. The constant functions C_k^n : ℕⁿ → ℕ with C_k^n(z1, …, zn) = k.
2. The projections P_k^n : ℕⁿ → ℕ with P_k^n(z1, …, zn) = z_k for 1 ≤ k ≤ n.
3. The successor function S : ℕ → ℕ with S(z) = z + 1.
Proposition 3.1.2. Every basic function is effectively computable.
In the next step we are going to close the basic functions under basic operations which
create new functions out of given ones.
Definition 3.1.3. The basic operations are
1. Substitution Sub(f, g1, …, g_{#f}) : ℕⁿ → ℕ with
Sub(f, g1, …, g_{#f})(z⃗) = f(g1(z⃗), …, g_{#f}(z⃗)),
where we have assumed that #g1 = … = #g_{#f} = n.
2. Primitive recursion R(g, h) : ℕ^{#g+1} → ℕ for #h = #g + 2, obeying the recursion equations
(R0) R(g, h)(z⃗, 0) = g(z⃗)
(RS) R(g, h)(z⃗, x + 1) = h(z⃗, x, R(g, h)(z⃗, x)).
Proposition 3.1.4. R(g, h) is the unique function obeying the recursion equations (R0) and (RS).
Proof. Assume that H : ℕ^{#g+1} → ℕ is another function satisfying (R0) and (RS). By induction on x we immediately show
H(z⃗, x) = R(g, h)(z⃗, x).
Proposition 3.1.5.
a) If f, g1, …, gn are effectively computable, then so is Sub(f, g1, …, gn).
b) If g and h are effectively computable, then so is R(g, h).
Proof. The algorithm to compute Sub(f, g1, …, gn)(z⃗) is obvious. To prove b) consider the following algorithm for computing R(g, h)(z⃗, x), which incorporates a simple kind of recursion. First compute g(z⃗) and call the result r0. Then go on to compute
r1 = h(z⃗, 0, r0), r2 = h(z⃗, 1, r1), …, r_{n+1} = h(z⃗, n, r_n)
and so on, until r_x is computed. Induction on x proves
R(g, h)(z⃗, x) = r_x.
So this yields an algorithm for computing R(g, h).
Now we are able to determine a class of effectively computable functions. The following definition is due to T. Skolem (1923) and K. Gödel (1931).
Definition 3.1.6. The class of primitive recursive functions is the smallest class of functions which contains all the basic functions and is closed under substitution and primitive recursion.
Proposition 3.1.7. All primitive recursive functions are effectively computable.
Proof. This follows immediately from 3.1.2 and 3.1.5.
Examples of primitive recursive functions are:
Addition, defined by the recursion equations
a + 0 = a
a + (x + 1) = S(a + x),
i.e. + = R(P_1^1, Sub(S, P_3^3)).
Multiplication, defined by
a · 0 = 0
a · (x + 1) = (a · x) + a.
Exponentiation, defined by
a^0 = 1
a^{x+1} = a^x · a.
The predecessor pd : ℕ → ℕ, defined by
pd(0) = 0
pd(x + 1) = x.
The arithmetical difference ∸ : ℕ² → ℕ, defined by
a ∸ 0 = a
a ∸ (x + 1) = pd(a ∸ x).
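These definitions are easy to animate on a computer. The following Python sketch (an illustration, not part of the formal development) represents the basic functions and the two basic operations directly; addition then literally becomes R(P_1^1, Sub(S, P_3^3)). The helper names C, P, S, Sub, R are ours.

```python
def C(n, k):
    """The constant function C_k^n of Definition 3.1.1."""
    return lambda *z: k

def P(n, k):
    """The k-th projection P_k^n (1 <= k <= n)."""
    return lambda *z: z[k - 1]

def S(z):
    """The successor function."""
    return z + 1

def Sub(f, *gs):
    """Substitution Sub(f, g_1, ..., g_#f) of Definition 3.1.3."""
    return lambda *z: f(*(g(*z) for g in gs))

def R(g, h):
    """Primitive recursion R(g, h), computed bottom-up via (R0) and (RS)."""
    def f(*args):
        *zs, x = args
        r = g(*zs)                 # (R0): R(g,h)(z, 0) = g(z)
        for i in range(x):         # (RS): R(g,h)(z, i+1) = h(z, i, R(g,h)(z, i))
            r = h(*zs, i, r)
        return r
    return f

# Addition exactly as in the text: + = R(P_1^1, Sub(S, P_3^3)).
add = R(P(1, 1), Sub(S, P(3, 3)))
```

Multiplication, exponentiation and the other examples above can be built in the same way by plugging previously defined functions into R and Sub.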
We will need the following properties of the arithmetical difference.
Proposition 3.1.8.
a) 0 ∸ x = 0
b) Sx ∸ Sy = x ∸ y
c) (a + x) ∸ x = a
d) a ∸ a = 0
e) Sy ∸ x = S(y ∸ x) for y ≥ x
We give the proofs as examples. Proofs of this type, however, will later be left to the reader.
a) is shown by induction on x:
0 ∸ 0 = 0 holds by definition, and
0 ∸ Sx = pd(0 ∸ x) =_{i.h.} pd(0) = 0.
b) is shown by induction on y:
Sx ∸ S0 = pd(Sx ∸ 0) = pd(Sx) = x = x ∸ 0
and
Sx ∸ S(Sy) = pd(Sx ∸ Sy) =_{i.h.} pd(x ∸ y) = x ∸ Sy.
c) is shown by induction on x:
(a + 0) ∸ 0 = a ∸ 0 = a
and
(a + Sx) ∸ Sx = pd((a + Sx) ∸ x) = pd(S(a + x) ∸ x) = S(a + x) ∸ Sx =_{b)} (a + x) ∸ x =_{i.h.} a.
d) follows from c) for a = 0, which indeed needs first proving 0 + a = a. This, however, is left as an exercise. One may as well trust in the fact that addition and multiplication are properly defined, i.e. that they have all the properties we are used to.
e) To prove this we observe that Sy ∸ 0 = Sy = S(y ∸ 0). For x = Sz we get
Sy ∸ x = Sy ∸ Sz =_{b)} y ∸ z = S(pd(y ∸ z)) = S(y ∸ Sz) = S(y ∸ x).
For y ∸ z = S(pd(y ∸ z)) we use the fact that y ∸ z > 0, which holds since y ≥ x > z.
A sometimes useful notation is λ-abstraction: for f : ℕ^{n+m} → ℕ we write
λx1 … xn. f(x1, …, xn, y1, …, ym)
to denote that f(x1, …, xn, y1, …, ym) is regarded as a function of the arguments x1, …, xn while y1, …, ym are regarded as fixed. Thus
λx1 … xn. f(x1, …, xn, y1, …, ym) = Sub(f, P_1^n, …, P_n^n, C_{y1}^n, …, C_{ym}^n),
which shows that the primitive recursive functions are closed under λ-abstraction.
Proposition 3.1.9. For all x1, x2 ∈ ℕ we have
min(x1, x2) = x1 ∸ (x1 ∸ x2).
So the function λxy. min(x, y) is primitive recursive.
Proof. If x1 ≤ x2, then x1 ∸ x2 = 0 and x1 ∸ (x1 ∸ x2) = x1. If x1 > x2, then we show x1 ∸ (x1 ∸ x2) = x2 by induction on x1. Let x1 = y + 1. Then by the last proposition
x1 ∸ (x1 ∸ x2) = (y + 1) ∸ ((y + 1) ∸ x2)
=_{e)} (y + 1) ∸ S(y ∸ x2)
=_{b)} y ∸ (y ∸ x2).
If y > x2 this yields y ∸ (y ∸ x2) = x2 by the induction hypothesis, and if y = x2 we get y ∸ (y ∸ x2) = x2 ∸ 0 = x2 by d).
Another important function is the sign function sg, defined by
sg(0) = 0 and sg(x + 1) = 1,
and its dual sḡ, defined by sḡ(0) = 1 and sḡ(x + 1) = 0.
Definition 3.1.10. A relation R ⊆ ℕⁿ is primitive recursive if its characteristic function χ_R : ℕⁿ → ℕ, defined by
χ_R(z1, …, zn) = 1 if (z1, …, zn) ∈ R, and χ_R(z1, …, zn) = 0 otherwise,
is primitive recursive.
Definition 3.1.11. Let K be a class of relations.
a) K is closed under boolean operations if for any n-ary truth function φ and any R1, …, Rn ∈ K (of one and the same arity) we have
{z⃗ : φ(R1(z⃗), …, Rn(z⃗)) = t} ∈ K.
b) K is closed under primitive recursive substitution if for R ∈ K with #R = n and any primitive recursive functions f1, …, fn (of one and the same arity) we have
{x⃗ : (f1(x⃗), …, fn(x⃗)) ∈ R} ∈ K.
c) K is closed under bounded quantification if for R ∈ K with #R = n + 1 the sets
{(x⃗, y) : ∀z ≤ y R(x⃗, z)} and {(x⃗, y) : ∃z ≤ y R(x⃗, z)}
belong to K.
Lemma 3.1.12. The primitive recursive relations are closed under boolean operations and under primitive recursive substitution.
Proof. Since ¬, ∧ form a complete set of connectives, it suffices to show that the primitive recursive relations are closed under ¬ and ∧, i.e. that if R and S are primitive recursive, so are
Q = {(z1, …, zn) : (z1, …, zn) ∉ R}
and
T = {(z1, …, zn) : (z1, …, zn) ∈ R and (z1, …, zn) ∈ S}.
But χ_Q = Sub(sḡ, χ_R) and χ_T = χ_R · χ_S (i.e. χ_T = Sub(·, χ_R, χ_S)).
Finally, if
Q = {z⃗ : (f1(z⃗), …, fn(z⃗)) ∈ R}
for primitive recursive functions f1, …, fn, then χ_Q = Sub(χ_R, f1, …, fn), which shows the closure under primitive recursive substitution.
Another important concept is the bounded search operator, defined by
μx≤z P(z1, …, zn, x) = min{x ≤ z : P(z1, …, zn, x)} if this minimum exists, and = 0 otherwise.
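Read operationally, μx≤z P is a simple bounded loop. A Python sketch of this reading (the predicate is passed as a function; the divisor example is made up for illustration, and the helper name mu_bounded is ours):

```python
def mu_bounded(P, *args):
    """mu_{x<=z} P(z_1,...,z_n,x): the least x <= z with P(z_1,...,z_n,x),
    and 0 if no such x exists (matching the convention in the text)."""
    *zs, z = args
    for x in range(z + 1):
        if P(*zs, x):
            return x
    return 0

# Example: the least non-trivial divisor of 91 found below the bound 91.
least_divisor = mu_bounded(lambda n, x: x > 1 and n % x == 0, 91, 91)
```

The point of the lemma that follows is that this search, being bounded, can also be carried out inside the primitive recursive functions.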
Lemma 3.1.13. Let P be an (n+1)-ary primitive recursive relation. Then
λz1 … zn z. μx≤z P(z1, …, zn, x)
is an (n+1)-ary primitive recursive function.
Proof. We define f by the following recursion equations:
f(z⃗, 0) = 0
f(z⃗, y + 1) = f(z⃗, y) + sḡ(f(z⃗, y)) · (y + 1) · χ_P(z⃗, y + 1) · sḡ(χ_P(z⃗, 0))
and prove f(z⃗, y) = μx≤y P(z⃗, x) by induction on y. If y = 0, then
f(z⃗, y) = 0 = μx≤y P(z⃗, x).
Let y = z + 1. Then f(z⃗, z) = μx≤z P(z⃗, x) by the induction hypothesis. We distinguish the following cases:
1. We have (z⃗, 0) ∈ P. Then
μx≤y P(z⃗, x) = μx≤z P(z⃗, x) = 0,
and we get f(z⃗, z) = 0 by the induction hypothesis and sḡ(χ_P(z⃗, 0)) = 0. Hence f(z⃗, y) = 0.
2. We have (z⃗, n) ∈ P for some 0 < n ≤ z but (z⃗, 0) ∉ P. Then 0 ≠ μx≤z P(z⃗, x) = f(z⃗, z). Hence sḡ(f(z⃗, z)) = 0 and
f(z⃗, y) = f(z⃗, z) = μx≤z P(z⃗, x) = μx≤y P(z⃗, x).
3. (z⃗, n) ∉ P for all 0 ≤ n ≤ z but (z⃗, y) ∈ P. Then μx≤z P(z⃗, x) = 0 and μx≤y P(z⃗, x) = y. Hence f(z⃗, z) = 0 and thus f(z⃗, y) = z + 1 = y.
4. (z⃗, n) ∉ P for all 0 ≤ n ≤ y. Then
μx≤z P(z⃗, x) = μx≤y P(z⃗, x) = 0.
Moreover f(z⃗, z) = 0 = χ_P(z⃗, z + 1). Hence f(z⃗, y) = 0.
Lemma 3.1.14. The class of the primitive recursive relations is closed under bounded quantification.
Proof. Since
∀x ≤ z P(z⃗, x) ⇔ ¬∃x ≤ z ¬P(z⃗, x)
and the primitive recursive relations are closed under negation, it suffices to prove the closure under bounded existential quantification. Thus let P be primitive recursive and
Q = {(z⃗, z) : ∃x ≤ z (z⃗, x) ∈ P}.
Then
χ_Q(z⃗, z) = sg(χ_P(z⃗, 0) + μx≤z P(z⃗, x)).
Hence Q is primitive recursive.
Lemma 3.1.15 (Definition by cases). Let P1, …, Pn be primitive recursive predicates which are pairwise disjoint and g1, …, g_{n+1} primitive recursive functions. Then the function f defined by
f(z⃗) = g1(z⃗) if P1(z⃗)
  ⋮
f(z⃗) = gn(z⃗) if Pn(z⃗)
f(z⃗) = g_{n+1}(z⃗) otherwise
is primitive recursive.
Proof. We have
f(z⃗) = Σ_{i=1}^{n} g_i(z⃗) · χ_{P_i}(z⃗) + g_{n+1}(z⃗) · sḡ(Σ_{i=1}^{n} χ_{P_i}(z⃗)).
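The arithmetic trick in this proof can be checked concretely. In the Python sketch below (an illustration; the predicates and functions in the example are made up) the case distinction is computed purely by sums and products of characteristic functions, exactly as in the displayed formula:

```python
def sg_bar(x):
    """The dual sign function: 1 if x = 0, else 0."""
    return 1 if x == 0 else 0

def cases(preds, funs, default):
    """Definition by cases via the arithmetic of the proof:
    f(z) = sum_i g_i(z)*chi_{P_i}(z) + g_{n+1}(z)*sg_bar(sum_i chi_{P_i}(z))."""
    def f(*z):
        chis = [1 if p(*z) else 0 for p in preds]
        return (sum(g(*z) * c for g, c in zip(funs, chis))
                + default(*z) * sg_bar(sum(chis)))
    return f

# A made-up example: halve even numbers, add one to odd numbers.
f = cases([lambda x: x % 2 == 0], [lambda x: x // 2], lambda x: x + 1)
```

Pairwise disjointness of the P_i matters: it guarantees that at most one characteristic function contributes to the sum.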
In the following table we give more examples of primitive recursive predicates and functions.

Predicate or function — definition:
absolute difference λx1x2.|x1 − x2| : (x1 ∸ x2) + (x2 ∸ x1)
equality {(x, y) : x = y} : χ₌ = λxy. sḡ(|x − y|)
less than {(x, y) : x < y} : χ₍₎ = λxy. sg(y ∸ x)
less than or equal {(x, y) : x ≤ y} : x < y ∨ x = y
x | y (`x divides y') : ∃z ≤ y (y = z · x)
Pr(x) (`x is a prime') : x ≠ 0 ∧ x ≠ 1 ∧ ∀z ≤ x (¬(z | x) ∨ z = 1 ∨ z = x)
{a1, …, an} (`finite set') : χ_{{a1,…,an}} = λx. sg(Σ_{i=1}^{n} χ₌(x, a_i))
{z⃗ : f(z⃗) = 0} (`zeros of a primitive recursive function f') : χ = sḡ ∘ f
g(x⃗, n) = Σ_{i=0}^{n} f(x⃗, i), f primitive recursive : g(x⃗, 0) = f(x⃗, 0), g(x⃗, n + 1) = g(x⃗, n) + f(x⃗, n + 1)
h(x⃗, n) = Π_{i=0}^{n} f(x⃗, i), f primitive recursive : h(x⃗, 0) = f(x⃗, 0), h(x⃗, n + 1) = h(x⃗, n) · f(x⃗, n + 1)
Exercises
E 3.1.1. Give a description of the following functions using only basic functions and basic operations.
a) · : ℕ² → ℕ, (a, b) ↦ a · b
b) exp : ℕ² → ℕ, (a, b) ↦ a^b
c) pd : ℕ → ℕ, a ↦ pd(a)
d) ∸ : ℕ² → ℕ, (a, b) ↦ a ∸ b
E 3.1.2. Prove: a ∸ a = 0.
E 3.1.3. Prove that the following functions are primitive recursive.
a) f1(x) = [log2(x + 1)],
where [r] is the greatest integer ≤ r for r ∈ ℝ.
b) f2(x, y) = [(x + 1)^{1/(y+1)}], the integer part of the (y+1)-st root of x + 1.
E 3.1.4. Let g : ℕ → ℕ be primitive recursive. Define f : ℕ² → ℕ by
f(x, 0) = g(x)
f(x, y + 1) = f(f(x, y), y).
Prove that f is primitive recursive.
E 3.1.5. The Ackermann-Péter function f is defined by the following recursion equations:
(I) f(0, y) = y + 1
(II) f(x + 1, 0) = f(x, 1)
(III) f(x + 1, y + 1) = f(x, f(x + 1, y))
Prove:
a) f : ℕ² → ℕ is a total function.
b) For all x ∈ ℕ the function f_x : ℕ → ℕ defined by
f_x(y) = f(x, y)
is primitive recursive.
c) f(1, y) = y + 2
d) f(2, y) = 2y + 3
e) f(x, y) > x + y
f) f(x, y + 1) > f(x, y)
g) f(x + 1, y) ≥ f(x, y + 1)
h) f(x + 1, y) > f(x, y)
i) f(x1, y) + f(x2, y) ≤ f(x1 + x2 + 4, y)
j) ∀x1 … ∀xn ∃x ∀y Σ_{i=1}^{n} f(x_i, y) ≤ f(x, y)
k) For all primitive recursive functions g : ℕⁿ → ℕ there is an x ∈ ℕ such that
∀x1 … ∀xn g(x1, …, xn) < f(x, Σ_{i=1}^{n} x_i).
Hint: Use induction on the definition of the primitive recursive functions. If g is built up by the recursor R, then choose x ∈ ℕ such that
∀x1 … ∀xn g(x1, …, xn) + Σ_{i=1}^{n} x_i < f(x, Σ_{i=1}^{n} x_i).
l) f : ℕ² → ℕ is not primitive recursive.
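For experimenting with parts c), d) and e) of this exercise, the recursion equations can be run directly. A Python sketch (memoisation via lru_cache is essential, since the naive recursion recomputes values massively; even so, only very small first arguments are feasible):

```python
import sys
from functools import lru_cache

# The recursion is deep even for small inputs.
sys.setrecursionlimit(200_000)

@lru_cache(maxsize=None)
def f(x, y):
    """The Ackermann-Peter function, directly from equations (I)-(III)."""
    if x == 0:
        return y + 1                  # (I)
    if y == 0:
        return f(x - 1, 1)            # (II)
    return f(x - 1, f(x, y - 1))      # (III)
```

Checking small values against c) and d) gives a quick sanity test of any proof attempt; the explosive growth claimed in k) and l) already shows at x = 4.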
E 3.1.6. Prove that every monotone decreasing function f : ℕ → ℕ is primitive recursive.
E 3.1.7. Let p, q ∈ ℕ with p, q > 0 and f : ℕ² → ℕ with
∀x∀y f(x + p, y) = f(x, y) = f(x, y + q).
Prove that f is primitive recursive.
E 3.1.8. Let f : ℕ → ℕ be primitive recursive and
a) monotone decreasing,
b) strictly monotone increasing.
Prove that rg(f) ⊆ ℕ is a primitive recursive relation.
E 3.1.9. We are going to define the class of polynomial functions inductively by:
1. C_k^n, P_k^n, S, +, · are polynomial.
2. If f, g1, …, gn are polynomial, then so is Sub(f, g1, …, gn).
3. If f, g, h are polynomial and
∀x⃗ R(g, h)(x⃗) ≤ f(x⃗),
then so is R(g, h).
Prove:
a) Every polynomial function is primitive recursive.
b) There is a primitive recursive function which is not polynomial.
3.2 Primitive Recursive Coding
The aim of this section is to provide primitive recursive coding functions, i.e. functions
c_n : ℕⁿ → ℕ
together with decoding functions p_{n,i} : ℕ → ℕ such that
p_{n,i}(c_n(z1, …, zn)) = z_i.
There are very different possibilities to obtain such functions. Here we will use the fact that every natural number is the product of uniquely determined prime powers, i.e. we define
c_n(x1, …, xn) = p(0)^{x1+1} · … · p(n−1)^{xn+1},
where λx.p(x) is the function which enumerates the primes. Due to the uniqueness of the factorisation of a natural number into prime powers, it is obvious how to obtain decoding functions. All we have to check is that this can be done primitive recursively. The first step in this direction is the following proposition.
Proposition 3.2.1. The function p which enumerates the primes is primitive recursive.
Proof. p satisfies the following recursion equations:
(p0) p(0) = 2
(pS) p(z + 1) = μx ≤ p(z)! + 1 (Pr(x) ∧ p(z) < x)
From the previous section it is obvious that this defines p primitive recursively. But of course we have to check that p satisfies (p0) and (pS). (p0) is obvious. All that needs checking is that there is a prime in the interval ]p(z), p(z)! + 1]. Towards a contradiction assume that
]p(z), p(z)! + 1] ∩ Pr = ∅.
Let q be some prime factor of p(z)! + 1. Then q ≤ p(z), which entails q | p(z)!. Together with q | p(z)! + 1 this gives q | 1, which is impossible.
Definition 3.2.2. Let ν(x, y) be the multiplicity of p(y) in the prime factorisation of x, i.e.
ν(x, y) = μz ≤ x ¬(p(y)^{z+1} | x).
Then ν : ℕ² → ℕ is primitive recursive and we have the following representation.
Proposition 3.2.3. If x ≠ 0, then
x = Π_{i=0}^{x} p(i)^{ν(x,i)}.
Now we are prepared for the definition of the coding and decoding functions which, following the common literature, will be denoted by ⟨ ⟩ and ( )_i.
Definition 3.2.4.
a) ⟨⟩ = 0 (the code of the empty sequence) and
⟨z0, …, zn⟩ = Π_{i=0}^{n} p(i)^{z_i+1}.
b) (x)_i = ν(x, i) ∸ 1.
Note that (x)_i is defined for arbitrary natural numbers x. However, not every natural number codes a sequence. Obviously only those natural numbers code sequences which have no gaps in their decomposition into prime factors, i.e. if p(i + 1) is a prime factor, then so is p(i). Thus we define
c) Seq = {z : z ≠ 1 ∧ ∀i ≤ z (p(i + 1) | z → p(i) | z)} and call Seq the set of sequence numbers.
Finally we define the length of the sequence coded by x as
d) lh(x) = μz ≤ x ¬(p(z) | x).
Proposition 3.2.5. λx⃗.⟨x⃗⟩, λxi.(x)_i and lh are primitive recursive functions, and Seq is a primitive recursive predicate such that
1. z ∈ Seq ∧ lh(z) = 0 ⇔ z = ⟨⟩, and
2. z = ⟨z0, …, z_{n−1}⟩ ⇔ z ∈ Seq ∧ lh(z) = n ∧ ∀i < n (z_i = (z)_i).
Finally we define the concatenation of sequence numbers by a⌢0 = a, 0⌢b = b and
a⌢b = ⟨(a)_0, …, (a)_{lh(a)∸1}, (b)_0, …, (b)_{lh(b)∸1}⟩
for a, b ≠ 0. Again this is defined for arbitrary a, b. It is easily verified that λxy. x⌢y is primitive recursive. For sequence numbers, however, we obviously get
⟨z0, …, zn⟩⌢⟨x0, …, xm⟩ = ⟨z0, …, zn, x0, …, xm⟩.
It is worthwhile to observe that (x)_i < x holds for all natural numbers x ≠ 0.
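The coding machinery of Definitions 3.2.2 through 3.2.5 is easy to try out. The Python sketch below (an illustration; it decodes by trial division rather than by the bounded-search definitions, but computes the same values) uses only made-up helper names:

```python
def p(n):
    """p(n) = the n-th prime, p(0) = 2 (by trial division)."""
    primes = []
    candidate = 2
    while len(primes) <= n:
        if all(candidate % q != 0 for q in primes):
            primes.append(candidate)
        candidate += 1
    return primes[n]

def code(zs):
    """<z_0,...,z_n> = prod_i p(i)^(z_i + 1); the empty sequence gets code 0."""
    if not zs:
        return 0
    result = 1
    for i, z in enumerate(zs):
        result *= p(i) ** (z + 1)
    return result

def nu(x, y):
    """Multiplicity of p(y) in the prime factorisation of x (x != 0)."""
    count = 0
    while x % p(y) == 0:
        x //= p(y)
        count += 1
    return count

def component(x, i):
    """(x)_i = nu(x, i) - 1, with arithmetical difference (never below 0)."""
    return max(nu(x, i) - 1, 0)

def lh(x):
    """Length of the coded sequence: least z with p(z) not dividing x."""
    if x == 0:
        return 0
    z = 0
    while x % p(z) == 0:
        z += 1
    return z
```

Decoding a code with component and lh recovers the original sequence, which is exactly clause 2 of Proposition 3.2.5.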
As a first application of coding (there will be many more of them later) we show that the primitive recursive functions are closed under course-of-values recursion. Let f : ℕ^{n+1} → ℕ be any function. The course-of-values function f̄ : ℕ^{n+1} → ℕ is defined by
(CV0) f̄(z⃗, 0) = ⟨⟩
(CVS) f̄(z⃗, x + 1) = f̄(z⃗, x)⌢⟨f(z⃗, x)⟩.
Then we have the following proposition.
Proposition 3.2.6.
a) f̄(z⃗, n + 1) = ⟨f(z⃗, 0), …, f(z⃗, n)⟩, and
b) f̄ is primitive recursive iff f is.
Proof.
a) follows by induction on n. For n = 0 we have f̄(z⃗, 1) = ⟨f(z⃗, 0)⟩ by (CV0) and (CVS), and for n = x + 1 we get
f̄(z⃗, n + 1) = f̄(z⃗, x + 1)⌢⟨f(z⃗, x + 1)⟩
=_{i.h.} ⟨f(z⃗, 0), …, f(z⃗, x)⟩⌢⟨f(z⃗, x + 1)⟩
= ⟨f(z⃗, 0), …, f(z⃗, n)⟩.
b) If f is primitive recursive, then f̄ is obviously also primitive recursive. If f̄ is primitive recursive, then f(z⃗, x) = (f̄(z⃗, x + 1))_x, which shows that f is primitive recursive.
□
The following result goes back to Thoralf Skolem (1923) and Rózsa Péter [*1905,
†1977] (1934).
Lemma 3.2.7 (Course-of-values recursion). Let g : ℕ^{n+2} → ℕ be primitive
recursive. Then the function h : ℕ^{n+1} → ℕ satisfying
h(~z, x) = g(~z, x, h̄(~z, x))
is uniquely determined and primitive recursive.
Proof. The course-of-values function of h is given by
h̄(~z, 0) = ⟨⟩
h̄(~z, x+1) = h̄(~z, x)⌢⟨g(~z, x, h̄(~z, x))⟩.
Thus h̄ is uniquely determined and primitive recursive. But then the same holds for
h. □
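The construction in this proof can be replayed concretely. In the sketch below (our illustration; Python tuples stand in for the sequence numbers ⟨…⟩, and the parameters ~z are omitted), the local variable hist plays the role of the course-of-values function:

```python
def course_of_values(g):
    """Return h with h(x) = g(x, (h(0), ..., h(x-1))), as in Lemma 3.2.7."""
    def h(x):
        hist = ()                       # the course-of-values h-bar, built step by step
        for y in range(x + 1):
            hist = hist + (g(y, hist),)  # (CVS): append h(y) = g(y, h-bar(y))
        return hist[x]
    return h

# The Fibonacci function of Exercise E 3.2.3, written as a course-of-values recursion:
fib = course_of_values(lambda x, hist: 1 if x < 2 else hist[x - 2] + hist[x - 1])
```

Here fib(6) computes to 13, since each value is read off the collected history rather than by a fixed number of earlier values.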
Corollary 3.2.8. Let g : ℕ^{n+k+1} → ℕ, g_0 : ℕ^n → ℕ and h_i : ℕ → ℕ, i = 1, …, k, be
primitive recursive functions such that h_i(x) < x holds for all x > 0 and i = 1, …, k.
Then there is a uniquely determined primitive recursive function f : ℕ^{n+1} → ℕ
satisfying
f(~z, 0) = g_0(~z) and
f(~z, x) = g(~z, x, f(~z, h_1(x)), …, f(~z, h_k(x))) for x ≠ 0.
Proof. We have
f(~z, 0) = g_0(~z) and
f(~z, x) = g(~z, x, (f̄(~z, x))_{h_1(x)}, …, (f̄(~z, x))_{h_k(x)}) for x ≠ 0,
so f is obtained by a course-of-values recursion (Lemma 3.2.7) and is therefore
primitive recursive. Uniqueness holds obviously. □
Another application of coding is the principle of simultaneous recursion. This result
was mentioned by David Hilbert and Paul Bernays [*1888, †1977] in 1934.
Lemma 3.2.9 (Simultaneous recursion). Let g_1, …, g_n and h_1, …, h_n be primitive
recursive functions. Then there are uniquely determined primitive recursive functions
f_1, …, f_n satisfying
f_i(~z, 0) = g_i(~z)
and
f_i(~z, x+1) = h_i(~z, x, f_1(~z, x), …, f_n(~z, x))
for i = 1, …, n.
Proof. Define a function f̃ by
f̃(~z, 0) = ⟨g_1(~z), …, g_n(~z)⟩
and
f̃(~z, x+1) = ⟨h_1(~z, x, (f̃(~z, x))_0, …, (f̃(~z, x))_{n-1}), …,
              h_n(~z, x, (f̃(~z, x))_0, …, (f̃(~z, x))_{n-1})⟩.
Then f̃ is primitive recursive and we get
f_i(~z, x) = (f̃(~z, x))_{i ∸ 1} for i = 1, …, n.
Uniqueness follows straightforwardly by induction on x. □
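The proof of Lemma 3.2.9 can be mirrored in a short sketch (our construction, again with Python tuples in place of the coded values f̃(~z, x); the helper name simultaneous is ours):

```python
def simultaneous(gs, hs):
    """f_i(0) = gs[i]; f_i(x+1) = hs[i](x, (f_1(x), ..., f_n(x))).
    As in the proof, one function f-tilde computes the tuple of all n values."""
    def f_tilde(x):
        vals = tuple(gs)
        for y in range(x):
            vals = tuple(h(y, vals) for h in hs)
        return vals
    # f_i is the (i-1)-st component of f-tilde (0-based here)
    return [lambda x, i=i: f_tilde(x)[i] for i in range(len(gs))]

# Example: mutually defined parity indicators even(x) and odd(x).
even, odd = simultaneous([1, 0], [lambda x, v: v[1], lambda x, v: v[0]])
```

The two resulting functions alternate as expected: even takes the values 1, 0, 1, 0, … and odd the values 0, 1, 0, 1, ….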
As another application of coding we will show that there are effectively computable
functions which are not primitive recursive. To get this result we have to code primitive
recursive functions by natural numbers.
Definition 3.2.10. The Gödel number ⌜f⌝ of a primitive recursive function f is
defined inductively.
1. ⌜C^n_k⌝ = ⟨0, n, k⟩
2. ⌜P^n_k⌝ = ⟨1, n, k⟩
3. ⌜S⌝ = ⟨2, 1⟩
4. ⌜Sub(g, h_1, …, h_n)⌝ = ⟨3, (⌜h_1⌝)_1, ⌜g⌝, ⌜h_1⌝, …, ⌜h_n⌝⟩
5. ⌜R(g, h)⌝ = ⟨4, (⌜g⌝)_1 + 1, ⌜g⌝, ⌜h⌝⟩.
Note that we have coded a primitive recursive function f in such a way that #f = (⌜f⌝)_1.
Put
CPR = {e : e codes a primitive recursive function}.
Then we observe the following.
Proposition 3.2.11. CPR is primitive recursive.
Proof. We have
e ∈ CPR ⟺ e ∈ Seq ∧
  [((e)_0 = 0 ∧ lh(e) = 3) ∨
   ((e)_0 = 1 ∧ lh(e) = 3 ∧ 0 < (e)_2 ≤ (e)_1) ∨
   ((e)_0 = 2 ∧ lh(e) = 2 ∧ (e)_1 = 1) ∨
   ((e)_0 = 3 ∧ lh(e) ≥ 4 ∧ ∀i < lh(e)((i > 1 → (e)_i ∈ CPR)
     ∧ (i > 2 → ((e)_i)_1 = (e)_1)) ∧ ((e)_2)_1 = lh(e) ∸ 3) ∨
   ((e)_0 = 4 ∧ lh(e) = 4 ∧ (e)_2 ∈ CPR ∧ (e)_3 ∈ CPR
     ∧ (e)_1 = ((e)_2)_1 + 1 ∧ ((e)_3)_1 = (e)_1 + 1)].
Since (e)_i < e for e ≠ 0, this is a definition by course-of-values recursion
(cf. Corollary 3.2.8), so CPR is primitive recursive. □
Lemma 3.2.12.
a) There are exactly ℵ_0 (= card(ℕ)) primitive recursive functions.
b) There are functions which are not primitive recursive.
Proof.
a) All constant functions are primitive recursive, so there are at least ℵ_0 primitive
recursive functions. Since distinct primitive recursive functions have distinct codes in
CPR ⊆ ℕ, we have at most ℵ_0 primitive recursive functions.
b) There are 2^{ℵ_0} (= card(2^ℕ)) functions from ℕ to ℕ. As ℵ_0 < 2^{ℵ_0}, by the
usual Cantor argument there is a function f : ℕ → ℕ which is not primitive
recursive. □
One should observe that ⌜f⌝ indeed codes an algorithm for the computation of f. The
leading component tells us whether we have to apply recursion or substitution, or whether we
have one of the basic functions. Since decoding is primitive recursive we may effectively
construct a primitive recursive function [e] out of a code e ∈ CPR. Thus the function
UPR(e, x) = [e](x) if e ∈ CPR ∧ (e)_1 = 1
UPR(e, x) = 0 otherwise
is effectively computable.
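To make this remark concrete, here is a minimal interpreter for the codes of Definition 3.2.10, a sketch of ours in which Python tuples replace the sequence numbers (so ⟨0, n, k⟩ becomes (0, n, k)); it covers exactly clauses 1 through 5:

```python
def ev(e, args):
    """Evaluate the primitive recursive function coded by e on the tuple args."""
    tag = e[0]
    if tag == 0:                          # C^n_k: constant with value k
        return e[2]
    if tag == 1:                          # P^n_k: projection onto the k-th argument
        return args[e[2] - 1]
    if tag == 2:                          # S: successor
        return args[0] + 1
    if tag == 3:                          # Sub(g, h_1, ..., h_n)
        g, hs = e[2], e[3:]
        return ev(g, tuple(ev(h, args) for h in hs))
    if tag == 4:                          # R(g, h): recursion on the last argument
        g, h = e[2], e[3]
        zs, x = args[:-1], args[-1]
        val = ev(g, zs)                   # f(~z, 0) = g(~z)
        for y in range(x):
            val = ev(h, zs + (y, val))    # f(~z, y+1) = h(~z, y, f(~z, y))
        return val

# plus = R(P^1_1, Sub(S, P^3_3)): plus(z, 0) = z, plus(z, x+1) = S(plus(z, x)).
plus = (4, 2, (1, 1, 1), (3, 3, (2, 1), (1, 3, 3)))
```

Running ev(plus, (3, 2)) unwinds the recursor twice and applies the successor at each step, returning 3 + 2.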
Theorem 3.2.13. UPR is not primitive recursive.
Proof. Towards a contradiction assume that UPR is primitive recursive. Then
λx.UPR(x, x) + 1 is also primitive recursive. Let e_0 be its code. Then
UPR(e_0, e_0) = [e_0](e_0) = UPR(e_0, e_0) + 1. □
The diagonalisation argument in the proof of Theorem 3.2.13 reveals a general dilemma.
Whenever we try to define effective computability in some precise way, there will be
an algorithm for computing each of these functions, and these algorithms can be coded. Thus
we will get some kind of coding, and once we have a class of functions which can be
coded we may use the same diagonalisation argument as in 3.2.13 to show that there
are effectively computable functions not belonging to that class. The way out of this
dilemma is to regard partial functions.
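The diagonalisation itself is easy to replay on a toy scale. Below we "code" a small family of total functions by their position in a list and diagonalise out of it (entirely our illustration; for the real UPR the list would be the enumeration of all [e] with e ∈ CPR):

```python
# A toy "effectively coded" family of total functions on the natural numbers.
family = [lambda x: 0, lambda x: x + 1, lambda x: x * x]

def diagonal(x):
    """d(x) = family[x](x) + 1 differs from the x-th coded function at argument x."""
    return family[x](x) + 1

# diagonal disagrees with every member of the family at its own code,
# so it cannot belong to the family -- the argument of Theorem 3.2.13.
for e in range(len(family)):
    assert diagonal(e) != family[e](e)
```

The same three lines of reasoning apply verbatim to any total, effectively enumerable class, which is exactly the dilemma described above.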
Exercises
E 3.2.1. Prove that the following predicates are primitive recursive.
a) P_1 = {n : n is the sum of two odd prime numbers}
b) P_2 = {n : n is the sum of its divisors ≠ n}
E 3.2.2. (Raphael M. Robinson, 1947) We define the iterative functions inductively
by the following clauses:
1. S, C^n_k, P^n_k, λx.(x)_i, λx_1 … x_n.⟨x_1, …, x_n⟩ are iterative functions.
2. If f, g_1, …, g_n are iterative, then so is Sub(f, g_1, …, g_n).
3. If g is iterative and unary and f : ℕ² → ℕ is defined by
f(0, x) = x
f(n+1, x) = g(f(n, x)),
then f is iterative.
Prove:
a) Every iterative function is primitive recursive.
b) Every primitive recursive function is iterative.
Hint: For f = R(g, h) consider the function s with
s(n, ~x) = ⟨f(~x, n), n, ~x⟩.
Show that if g and h are iterative then so is s.
E 3.2.3. The Fibonacci function is defined by
f(0) = f(1) = 1
f(n+2) = f(n) + f(n+1).
Show that f is primitive recursive.
E 3.2.4. Define π : ℕ² → ℕ by π(n, m) = (m+n)(m+n+1)/2 + m. Prove that there are
primitive recursive functions π_1, π_2 : ℕ → ℕ such that
π(π_1(n), π_2(n)) = n
using simultaneous recursion.
## 3.3 Partial Recursive Functions and the Normal Form Theorem
Definition 3.3.1.
a) A function f : M → ℕ with M ⊆ ℕ^n is called a partial number theoretical
function. We denote this by f : ℕ^n ⇀ ℕ. We call M the domain of f and put
dom(f) = M. If x ∉ dom(f), then f(x) is undefined, which is often denoted by
f(x)↑. On the other hand f(x)↓ stands for x ∈ dom(f).
b) For partial functions f and g we define f(n) ≃ g(n) iff
n ∉ dom(f) ∪ dom(g) ∨ (n ∈ dom(f) ∩ dom(g) ∧ f(n) = g(n)),
i.e.
(f(n)↑ ∧ g(n)↑) ∨ (f(n)↓ ∧ g(n)↓ ∧ f(n) = g(n)).
The easiest way to memorise this relation is to think of f(x) = ∞ if f(x)↑, where
∞ is some element not in ℕ. For those extended functions f(x) ≃ g(x) really means
f(x) = g(x).
Definition 3.3.2. The unbounded search operator μ assigns to a partial function
f : ℕ^{n+1} ⇀ ℕ the partial function μ(f) : ℕ^n ⇀ ℕ defined by
μ(f)(~z) = min{x : f(~z, x) = 0 ∧ ∀z < x ∃y ≠ 0 (f(~z, z) ≃ y)}.
One should notice that μ(f) can be partial even if f is total, i.e. if dom(f) = ℕ^{n+1}.
This is the case when f has no zeros. We will also define the unbounded
search operator for predicates. Let P ⊆ ℕ^{n+1} be a predicate. Then μ(P) : ℕ^n ⇀ ℕ
is defined by
μ(P)(~z) = μ(sg_P)(~z) = min{x : (~z, x) ∈ P}.
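A direct Python rendering of the μ-operator (our sketch; it inherits the partiality, since the loop below need not terminate when f has no zero):

```python
def mu(f):
    """(mu f)(~z): the least x with f(~z, x) == 0; diverges if there is none."""
    def g(*zs):
        x = 0
        while f(*zs, x) != 0:   # earlier values must be defined and nonzero
            x += 1
        return x
    return g

# mu applied to a total function: the least x with (x+1)^2 > n,
# which is the integer square root of n.
isqrt = mu(lambda n, x: 0 if (x + 1) ** 2 > n else 1)
```

Applying g to an f without zeros, e.g. mu(lambda x: 1), would loop forever, which is exactly the source of partiality noted above.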
Definition 3.3.3. (S. C. Kleene, 1938) The class of partial recursive functions is
the least class of partial functions which contains all the basic functions and is closed
under substitution, primitive recursion and unbounded search.
A function f : ℕ^n → ℕ is recursive if it is partial recursive and total, i.e. dom(f) = ℕ^n.
Proposition 3.3.4. Every recursive function is effectively computable.
Proof. The proof is by 3.1.2, 3.1.5 and the additional observation that for an effectively
computable total function f for which μ(f) is total, μ(f) is also effectively computable.
The algorithm for μ(f) is to compute f(~z, 0), f(~z, 1), … successively. Since by
hypothesis μ(f)(~z)↓, we eventually find the least n ∈ ℕ such that f(~z, n) = 0 and
μ(f)(~z) = n. □
One should observe that a partial recursive function is in general not effectively computable.
If we try to compute f(~x) by applying the algorithm for f given by its
definition as a partial recursive function, it may well happen that the algorithm does
not terminate. During our computation, however, we cannot know that and may still
hope that it eventually will. Only for ~x ∈ dom(f) will the algorithm terminate.
For that reason one sometimes calls partial recursive functions positively computable.
There is an algorithm which gives the correct answer in case f(~x)↓ but
doesn't tell us anything in case f(~x)↑.
Proposition 3.3.5. Every primitive recursive function is recursive.
It is easy to code the partial recursive functions. All we have to do is to extend
Definition 3.2.10 by the additional clause
6. ⌜μ(f)⌝ = ⟨5, (⌜f⌝)_1 ∸ 1, ⌜f⌝⟩.
Then we put
CP = {e : e codes a partial recursive function}
and can easily extend Proposition 3.2.11 to
Proposition 3.3.6. CP is a primitive recursive set.
Proof. We refer to the proof of 3.2.11. All we have to do is to replace CPR by CP and
to add the disjunct
((e)_0 = 5 ∧ lh(e) = 3 ∧ (e)_2 ∈ CP ∧ (e)_1 = ((e)_2)_1 ∸ 1)
to the disjunction. □
It is again obvious that we may easily reconstruct the function f from its code ⌜f⌝.
Usually we denote the function with code e by {e}. Thus we have a relation
R(e, ~z, x) ⟺ {e}(~z) ≃ x,
and our next aim is to determine the complexity of that relation. For that reason we
are going to code the computation of {e}(~z). Informally we will do that by defining a
sequence
c = ⟨α, e, i, ⟨c_1, …, c_n⟩⟩
where α codes the output, e codes the function, i the input and c_1, …, c_n the
subcomputations which are necessary for the computation of
{e}((i)_0, …, (i)_{lh(i) ∸ 1}).
Let Cmp denote the set of codes for computations. This we would like to manage as
follows: for the basic functions we have computations of the form
⟨k, ⌜C^n_k⌝, i, ⟨⟩⟩
⟨(i)_{k ∸ 1}, ⌜P^n_k⌝, i, ⟨⟩⟩
⟨(i)_0 + 1, ⌜S⌝, i, ⟨⟩⟩.
In the case of a substitution a computation has the form
⟨α_0, ⌜Sub(g, h_1, …, h_n)⌝, i, ⟨⟨α_0, ⌜g⌝, ⟨α_1, …, α_n⟩, …⟩,
  ⟨α_1, ⌜h_1⌝, i, …⟩, …, ⟨α_n, ⌜h_n⌝, i, …⟩⟩⟩.
For the recursor a computation should look like
⟨α_n, ⌜R(g, h)⌝, i⌢⟨n⟩, ⟨⟨α_0, ⌜g⌝, i, …⟩,
  ⟨α_1, ⌜h⌝, i⌢⟨0, α_0⟩, …⟩, …, ⟨α_n, ⌜h⌝, i⌢⟨n ∸ 1, α_{n ∸ 1}⟩, …⟩⟩⟩,
and a computation with the unbounded search operator is given by
⟨n, ⌜μ(f)⌝, i, ⟨⟨α_0, ⌜f⌝, i⌢⟨0⟩, …⟩,
  ⟨α_1, ⌜f⌝, i⌢⟨1⟩, …⟩, …, ⟨α_n, ⌜f⌝, i⌢⟨n⟩, …⟩⟩⟩,
where α_0, …, α_{n ∸ 1} ≠ 0 and α_n = 0. Then we have
c ∈ Cmp ⟺ c ∈ Seq ∧ lh(c) = 4 ∧ (c)_1 ∈ CP ∧ (c)_2 ∈ Seq ∧
  lh((c)_2) = (c)_{1,1} ∧ (c)_3 ∈ Seq ∧
  [[(c)_{1,0} = 0 ∧ (c)_0 = (c)_{1,2} ∧ (c)_3 = ⟨⟩] ∨
   [(c)_{1,0} = 1 ∧ (c)_0 = (c)_{2,((c)_{1,2} ∸ 1)} ∧ (c)_3 = ⟨⟩] ∨
   [(c)_{1,0} = 2 ∧ (c)_0 = (c)_{2,0} + 1 ∧ (c)_3 = ⟨⟩] ∨
   [(c)_{1,0} = 3 ∧ ∀i < lh((c)_3)((c)_{3,i} ∈ Cmp) ∧ (c)_{3,0,0} = (c)_0 ∧
     (c)_{3,0,1} = (c)_{1,2} ∧
     ∀j < lh((c)_3)(j = 0 ∨ ((c)_{3,j,1} = (c)_{1,j+2} ∧ (c)_{3,j,2} = (c)_2)) ∧
     ∀j < lh((c)_{3,0,2})((c)_{3,0,2,j} = (c)_{3,j+1,0})] ∨
   [(c)_{1,0} = 4 ∧ lh((c)_3) = (c)_{2,lh((c)_2) ∸ 1} + 1 ∧
     ∀i < lh((c)_3)((c)_{3,i} ∈ Cmp) ∧
     (c)_{3,0,1} = (c)_{1,2} ∧ (c)_{3,0,2} = ⟨(c)_{2,0}, …, (c)_{2,lh((c)_2) ∸ 2}⟩ ∧
     ∀j < lh((c)_3)(j = 0 ∨ ((c)_{3,j,1} = (c)_{1,3} ∧
       (c)_{3,j,2} = ⟨(c)_{2,0}, …, (c)_{2,lh((c)_2) ∸ 2}, j ∸ 1, (c)_{3,j ∸ 1,0}⟩)) ∧
     (c)_0 = (c)_{3,lh((c)_3) ∸ 1,0}] ∨
   [(c)_{1,0} = 5 ∧ lh((c)_3) = (c)_0 + 1 ∧ ∀i < lh((c)_3)((c)_{3,i} ∈ Cmp ∧
     (c)_{3,i,1} = (c)_{1,2} ∧ (c)_{3,i,2} = ⟨(c)_{2,0}, …, (c)_{2,lh((c)_2) ∸ 1}, i⟩ ∧
     (i < (c)_0 → (c)_{3,i,0} ≠ 0)) ∧ (c)_{3,(c)_0,0} = 0]].
It is again obvious by the corollary to the course-of-values recursion that Cmp is a
primitive recursive set. Thus we get
{e}^n(z_1, …, z_n) ≃ x ⟺ ∃z(⟨x, e, ⟨z_1, …, z_n⟩, z⟩ ∈ Cmp).
According to the common notation we define
T^n(e, z_1, …, z_n, z) ⟺ ⟨(z)_0, e, ⟨z_1, …, z_n⟩, (z)_1⟩ ∈ Cmp
and call T^n the Kleene T-predicate. Then we have the following theorem by Stephen
Cole Kleene [*1909] in 1936.
Theorem 3.3.7 (Kleene's normal form theorem). There is a primitive recursive
relation T^n and a primitive recursive function U such that #T^n = n + 2 and for every
partial recursive function f : ℕ^n ⇀ ℕ there is a natural number e with
f(~z) ≃ U(μx T^n(e, ~z, x)).
The proof of Theorem 3.3.7 is obvious from the above construction of the T^n-predicate.
In e the 'program' computing the function f is coded, x codes the computation of f
applied to the arguments ~z, and U extracts the value f(~z) from the computation. The
function U is just the decoding function λx.(x)_0, which is primitive recursive.
Exercises
E 3.3.1. (S. C. Kleene, 1936) The class R is defined inductively by
1. The basic functions are in R.
2. R is closed under substitution and the recursor.
3. If f : ℕ^{n+1} → ℕ is in R and
∀x_1 … ∀x_n ∃y f(x_1, …, x_n, y) = 0,
then μ(f) is in R.
Prove:
a) Every function in R is recursive.
b) Every recursive function is an element of R.
## 3.4 Universal Functions and the Recursion Theorem
Definition 3.4.1. A partial function f : ℕ^{n+1} ⇀ ℕ is called universal for a class C
of n-ary partial functions (i.e. C ⊆ {g : g : ℕ^n ⇀ ℕ}) if
∀g ∈ C ∃e ∀x_1 … ∀x_n f(e, x_1, …, x_n) ≃ g(x_1, …, x_n).
Such an e ∈ ℕ is called an index of g w.r.t. f.
As a corollary to Kleene's normal form theorem one obtains the following result, stated
by Emil Leon Post [*1897, †1954] in 1922, Alan Mathison Turing [*1912, †1954]
in 1936 and S. C. Kleene in 1938.
Lemma 3.4.2. The function Φ^n : ℕ^{n+1} ⇀ ℕ with
Φ^n(e, ~x) ≃ {e}^n(~x)
is partial recursive and universal for the class of n-ary partial recursive functions.
Proof. It is
{e}^n(~x) ≃ U(μy T^n(e, ~x, y)),
so Φ^n is partial recursive, and Theorem 3.3.7 shows that Φ^n is universal. □
Lemma 3.4.3 (S^m_n-theorem). There is an (m+1)-ary primitive recursive function
S^m_n : ℕ^{m+1} → ℕ such that
∀x_1 … ∀x_m ∀y_1 … ∀y_n {e}^{m+n}(x_1, …, x_m, y_1, …, y_n) ≃
{S^m_n(e, x_1, …, x_m)}(y_1, …, y_n).
Proof. It is {S^m_n(e, x_1, …, x_m)} = Sub({e}, C^n_{x_1}, …, C^n_{x_m}, P^n_1, …, P^n_n) and we may
define
S^m_n(e, x_1, …, x_m) = ⟨3, n, e, ⟨0, n, x_1⟩, …, ⟨0, n, x_m⟩, ⟨1, n, 1⟩, …, ⟨1, n, n⟩⟩,
which is primitive recursive. □
The S^m_n-theorem tells us that it is possible to obtain a code of the function
λy_1 … y_n.f(x_1, …, x_m, y_1, …, y_n)
out of a code for f in a primitive recursive manner. It is due to S. C. Kleene
(1938). Using this lemma it is possible to prove one of the main tools in recursion theory,
also mentioned by Kleene in 1938.
Theorem 3.4.4 (Recursion theorem). For every (n+1)-ary partial recursive function
f there is an index e ∈ ℕ such that
∀x_1 … ∀x_n {e}^n(x_1, …, x_n) ≃ f(e, x_1, …, x_n).
Proof. Since f is partial recursive so is
λy~x.f(S^1_n(y, y), ~x).
Now let e_0 be an index of this function and define e = S^1_n(e_0, e_0). Then we have
{e}^n(~x) ≃ {S^1_n(e_0, e_0)}^n(~x)
         ≃ {e_0}^{n+1}(e_0, ~x)     by the S^m_n-theorem
         ≃ f(S^1_n(e_0, e_0), ~x)   by the definition of e_0
         ≃ f(e, ~x). □

Here we are going to give an easy example, how to use the recursion theorem. We want
to prove, using the recursion theorem, that there is a recursive function f : IN2 ! IN
with
f(x; 0) ' x2
f(x; n + 1) ' f(f(x; n); n):
Therefore we are going to de ne h : IN3 P! IN by
( 2
h(e; x; n) ' x 2 2 if n = 0
feg (feg (x; n _ 1); n _ 1) if n 6= 0:
Using Theorem 3.4.2 h is partial recursive. Using the recursion theorem there is an
e 2 IN, such that
8x8n h(e; x; n) ' feg2(x; n):
136 III. Theory of Decidability
Now one can prove by induction on n : 8n8x feg2(x; n) #. So de ne f = feg2, i.e. f is
partial recursive and dom(f) = IN2 , i.e. f is total. So f is recursive.
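The defining equations of this example can also simply be run. In the Python sketch below (our illustration), the language's own self-reference plays the role of the index e supplied by the recursion theorem, and the totality proved by induction above guarantees that every call terminates:

```python
def f(x, n):
    """f(x, 0) = x^2 and f(x, n+1) = f(f(x, n), n), as in the example above."""
    if n == 0:
        return x * x
    return f(f(x, n - 1), n - 1)
```

For instance f(2, 1) = f(f(2, 0), 0) = f(4, 0) = 16, and f(2, 2) = f(16, 1) = 65536.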
Exercises
E 3.4.1. Prove that there is a primitive recursive function f : ℕ² → ℕ such that
∀x {f(e_0, e_1)}¹(x) ≃ {e_0}¹({e_1}¹(x)).
E 3.4.2. Prove that the Ackermann-Péter function defined in E 3.1.5 is recursive, using
the recursion theorem.
E 3.4.3.
a) (R. Péter, 1935) Prove that the function UPR of Section 3.2 is recursive.
b) Prove that there is a relation which is recursive (i.e. a relation R whose characteristic
function is recursive) but not primitive recursive.
E 3.4.4. Prove that there is no normal form theorem of the following shape: there
is a primitive recursive relation T ⊆ ℕ³ such that for all partial recursive functions
f : ℕ ⇀ ℕ there is an e ∈ ℕ with
∀x f(x) ≃ μy T(e, x, y).
E 3.4.5. (S. C. Kleene, A. M. Turing, 1936) Prove that there is no recursive
universal function for the class of recursive functions f with rg(f) ⊆ {0, 1}.
E 3.4.6. Prove that the partial recursive functions are closed under
a) course-of-values recursion.
b) simultaneous recursion.
## 3.5 Recursive, Semi-recursive and Recursively Enumerable Relations
Definition 3.5.1. (E. L. Post (1922), S. C. Kleene (1936)) We call a relation
R ⊆ ℕ^n
1. recursive, if its characteristic function χ_R is a recursive function;
2. semi-recursive, if R = dom(f) for some partial recursive function f;
3. recursively enumerable (briefly: r.e.), if R = ∅ or there is a recursive function f
such that
R = {(z_1, …, z_n) : ∃x(f(x) = ⟨z_1, …, z_n⟩)},
i.e. f enumerates ⟨R⟩ = {⟨z_1, …, z_n⟩ : (z_1, …, z_n) ∈ R}.
If we had defined recursive functions with ranges ⊆ ℕ^n, then we could have defined the recursively
enumerable relations as the ranges of recursive functions. Because we have primitive recursive
coding functions we have the following result.
Proposition 3.5.2. A relation R ⊆ ℕ is recursively enumerable iff R = ∅ or there is
a recursive function f such that
R = rg(f) = {x : ∃y(f(y) = x)}.
One should notice that recursive predicates are decidable. To check whether ~z ∈ P we
just have to compute χ_P(~z). Semi-recursive predicates, however, are only positively
decidable. To check ~z ∈ P = dom(f) we may apply the algorithm for f. If ~z ∈ dom(f)
it will terminate and we get the answer 'yes'. But if ~z ∉ dom(f) we will never get an
answer. Recursively enumerable predicates are also only positively decidable. Given
a recursively enumerable set R with ⟨R⟩ = rg(f), we may successively compute
the list f(0), f(1), f(2), …. If ~z ∈ R, then ⟨~z⟩ will eventually show up in that list; if
~z ∉ R, however, we get no answer, because ⟨~z⟩ never shows up, but at no point
can we be sure that it might not show up later.
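Positive decidability can be made tangible: given a recursive f enumerating ⟨R⟩, we can search the list f(0), f(1), … for some bounded number of steps (the step bound and the helper names below are our additions; a genuine semi-decision procedure simply has no bound):

```python
def semi_decide(f, target, steps):
    """Search f(0), ..., f(steps-1) for target.
    True means 'yes'; None means 'no answer yet' -- target may still
    show up later in the enumeration."""
    for y in range(steps):
        if f(y) == target:
            return True
    return None

# Toy instance: f enumerates the set of square numbers.
square = lambda y: y * y
```

For 49 the search succeeds, while for 50 (not a square) no bounded search can ever rule membership out; it only fails to answer.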
Our next aim is to study the closure properties of the recursive, semi-recursive and
recursively enumerable relations.
Lemma 3.5.3. The class of recursive relations is closed under boolean operations,
bounded quantification and recursive substitution.
The proof is the same as that of 3.1.12 and 3.1.14. The key is that every primitive
recursive function is of course recursive, and the closure under boolean operations and
bounded quantification was obtained by using primitive recursive functions as auxiliary
functions. The closure under recursive substitution is a consequence of the closure of
recursive functions under substitution, which in its turn holds trivially because partial
recursive functions are closed under substitution by definition and substitution preserves
the totality of the involved functions.
The following theorem is a normal form theorem for recursively enumerable relations.
It is due to S. C. Kleene in 1936, John Barkley Rosser [*1907, †1989] in 1936 and
Andrzej Mostowski [*1913, †1975] in 1947.
Theorem 3.5.4. A relation P ⊆ ℕ^n is r.e. iff there is a recursive relation R ⊆ ℕ^{n+1}
such that
P = {(z_1, …, z_n) : ∃x((z_1, …, z_n, x) ∈ R)}.
Proof. Let P ⊆ ℕ^n be r.e. If P = ∅, then put
R = {(z_1, …, z_n, x) : C^{n+1}_1(z_1, …, z_n, x) = 0}.
If ⟨P⟩ = rg(f), then
P = {(z_1, …, z_n) : ∃x(f(x) = ⟨z_1, …, z_n⟩)}
and
R = {(z_1, …, z_n, x) : f(x) = ⟨z_1, …, z_n⟩}
is recursive by 3.5.3.
For the opposite direction let
P = {(z_1, …, z_n) : ∃x((z_1, …, z_n, x) ∈ R)}
for some recursive relation R. If P ≠ ∅ we choose (a_1, …, a_n) ∈ P and define
f(x) = ⟨(x)_0, …, (x)_{n ∸ 1}⟩ if ((x)_0, …, (x)_n) ∈ R
f(x) = ⟨a_1, …, a_n⟩ otherwise.
Then f is recursive and ⟨P⟩ = rg(f). □
Of course ∃x in 3.5.4 might be a dummy quantifier. So we have as a corollary the
following fact.
Lemma 3.5.5. Every recursive relation is r.e.
Later on we will see that the converse of 3.5.5 is false. Thus the recursive relations
form a proper subclass of the r.e. relations. To get the bridge to the semi-recursive
relations we define
W^n_e = {(z_1, …, z_n) : ∃y T^n(e, z_1, …, z_n, y)},
i.e. W^n_e = dom({e}^n). Since every partial recursive function is {e} for some e ∈ CP we
see that (W^n_e)_{e ∈ CP} enumerates all the semi-recursive relations. Since {(z_1, …, z_n, y) :
T^n(e, z_1, …, z_n, y)} is a recursive (even primitive recursive) relation we get by 3.5.4:
Proposition 3.5.6. Every semi-recursive relation is r.e.
On the other hand, if P is r.e., then P = {~z : ∃x((~z, x) ∈ R)} for some recursive relation
R. If we put
f(~z) = μx((~z, x) ∈ R)
we get P = dom(f), which shows that P is semi-recursive. Thus we have the following
characterisation of the r.e. relations by S. C. Kleene (1936).
Theorem 3.5.7. The class of semi-recursive and the class of r.e. relations coincide.
After having seen that the semi-recursive and recursively enumerable relations coincide
we are going to study their closure properties.
Lemma 3.5.8. The class of r.e. relations is closed under positive boolean operations
(i.e. ∧ and ∨), bounded quantification, recursive substitution and unbounded existential
quantification.
Proof. Let P_1 and P_2 be r.e. relations. Then P_i = {~z : ∃y((~z, y) ∈ R_i)} for some
recursive relations R_i (i = 1, 2). But then we have
P_1 ∪ P_2 = {~z : (∃y_1((~z, y_1) ∈ R_1)) ∨ (∃y_2((~z, y_2) ∈ R_2))}
          = {~z : ∃y((~z, (y)_0) ∈ R_1 ∨ (~z, (y)_1) ∈ R_2)},
and analogously for P_1 ∩ P_2 with ∧ in place of ∨. Because of the closure of recursive
relations under boolean operations and recursive substitutions we have
{(~z, y) : (~z, (y)_0) ∈ R_1 ∨ (~z, (y)_1) ∈ R_2}
as a recursive relation. Thus P_1 ∪ P_2 (and likewise P_1 ∩ P_2) is r.e. by 3.5.4. The closure
under recursive substitution again follows from 3.5.4 and the fact that the recursive
relations are closed under recursive substitution. Let us prove the closure under bounded
quantification. We postpone the case of bounded ∃-quantification because it is entailed
by unbounded ∃-quantification. Thus let
Q = {(~x, z) : ∀y ≤ z((~x, y) ∈ P)}
for some r.e. relation P. We use 3.5.4 to get
Q = {(~x, z) : ∀y ≤ z ∃u((~x, y, u) ∈ R)}
for some recursive relation R. But then we claim
Q = {(~x, z) : ∃v ∀y ≤ z((~x, y, (v)_y) ∈ R)}.
The inclusion '⊇' is obvious. To show '⊆' let (~x, z) ∈ Q. Then for every y ≤ z there is
a u_y such that (~x, y, u_y) ∈ R. Put v = ⟨u_0, …, u_z⟩ to see that (~x, z) is a member of
the set on the right-hand side. To show closure under unbounded ∃-quantification we assume
Q = {~z : ∃y((~z, y) ∈ P)}
for an r.e. relation P. Then by 3.5.4
Q = {~z : ∃u ∃y((~z, y, u) ∈ R)} = {~z : ∃v((~z, (v)_0, (v)_1) ∈ R)}
for some recursive relation R, which by the closure properties of recursive relations
and 3.5.4 entails that Q is also r.e. Because
∃x ≤ z((~z, x) ∈ P) ⟺ ∃x(x ≤ z ∧ (~z, x) ∈ P),
closure under unbounded ∃-quantification together with closure under ∧ entails closure
under bounded ∃-quantification. □
Theorem 3.5.9. A function f : ℕ^n ⇀ ℕ is partial recursive iff its graph G_f =
{(~z, y) : f(~z) ≃ y} is r.e.
Proof. We have for partial recursive f
(~z, y) ∈ G_f ⟺ f(~z) ≃ y
             ⟺ ∃u(T^n(⌜f⌝, ~z, u) ∧ U(u) = y ∧ ∀v < u ¬T^n(⌜f⌝, ~z, v)).
The relation in parentheses is obviously recursive (even primitive recursive). Thus G_f
is r.e. by 3.5.4. For the opposite direction let
G_f = {(~z, y) : ∃u R(~z, y, u)}
for a recursive relation R. We claim that
f(~z) ≃ (μu R(~z, (u)_0, (u)_1))_0.
If f(~z)↑, then we have ∀y ∀u ¬R(~z, y, u). Thus μu R(~z, (u)_0, (u)_1)↑. If f(~z)↓, then
∃!y ∃v R(~z, y, v). Take the least such v and put u = ⟨y, v⟩. Then
f(~z) ≃ y ≃ (μu R(~z, (u)_0, (u)_1))_0. □
Now we have the means to give a second proof that there are universal partial
recursive functions.
Corollary 3.5.10. The function Φ^n : ℕ^{n+1} ⇀ ℕ defined by Φ^n(e, ~z) ≃ {e}^n(~z) is
partial recursive.
Proof. For the graph of the function Φ^n we have Φ^n(e, ~z) ≃ y iff {e}^n(~z) ≃ y, which
means
∃u(T^n(e, ~z, u) ∧ ∀z < u ¬T^n(e, ~z, z) ∧ U(u) = y).
So G_{Φ^n} is r.e. and by 3.5.9 Φ^n is partial recursive. □
We close this section with a characterisation of recursive relations which is due to
E. L. Post (1943), S. C. Kleene (1943) and A. Mostowski (1947).
Theorem 3.5.11 (Post's theorem). A relation R is recursive iff both R and ¬R
are r.e.
Proof. For the easy direction let R be recursive. Then R and ¬R are recursive and
hence also r.e. by 3.5.5. For the opposite direction let R and ¬R both be r.e. Then we
have recursive relations P_1 and P_2 such that
~x ∈ R ⟺ ∃u((~x, u) ∈ P_1)
and
~x ∉ R ⟺ ∃v((~x, v) ∈ P_2).
Let
f(~x) ≃ μz((~x, z) ∈ P_1 ∨ (~x, z) ∈ P_2).
Then f is partial recursive and obviously also total. Hence f is recursive. We claim
~x ∈ R ⟺ (~x, f(~x)) ∈ P_1,
which by the closure of recursive relations under recursive substitutions entails the
recursiveness of R. To prove the claim we observe
~x ∈ R ⟹ ∃u((~x, u) ∈ P_1) ∧ ∀u((~x, u) ∉ P_2).
Thus (~x, f(~x)) ∈ P_1 and we have the direction from left to right. Conversely if ~x ∉ R,
then ∀u((~x, u) ∉ P_1), which implies (~x, f(~x)) ∉ P_1. □
Post's theorem is easy to visualise if we think in terms of decidability and positive
decidability. If R is decidable, then so is ¬R, and thus both are positively decidable.
On the other hand, if R and ¬R are positively decidable, we simultaneously apply the
algorithms which decide ~x ∈ R and ~x ∉ R positively. Since either ~x ∈ R or ~x ∉ R, we
eventually get an answer assuring ~x ∈ R or ~x ∉ R. This, however, decides ~x ∈ R.
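The simultaneous application of the two positive algorithms can be sketched as a dovetailed search (our illustration; pos(x, s) and neg(x, s) stand for "a witness for x ∈ R, respectively x ∉ R, is found within s steps"):

```python
def decide(pos, neg, x):
    """Interleave the two semi-decision procedures.  Since x lies in R or in
    its complement, one of the two searches eventually answers, and that
    answer decides membership of x in R."""
    s = 0
    while True:
        if pos(x, s):
            return True
        if neg(x, s):
            return False
        s += 1

# Toy instance: R = the even numbers, with brute-force witness searches.
pos = lambda x, s: any(x == 2 * u for u in range(s + 1))
neg = lambda x, s: any(x == 2 * u + 1 for u in range(s + 1))
```

The while loop is guaranteed to terminate only because the two sets together cover ℕ, which is exactly the content of Post's theorem.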
Corollary 3.5.12. A (total) function f : ℕ^n → ℕ is recursive iff its graph is recursive.
Proof. If f is recursive, then G_f is r.e. by Theorem 3.5.9. Because
(~z, y) ∉ G_f ⟺ ∃x(f(~z) = x ∧ y ≠ x)
             ⟺ ∃x((~z, x) ∈ G_f ∧ y ≠ x),
¬G_f is r.e., too. So by 3.5.11 G_f is recursive. The second direction follows directly by
3.5.9. □
Finally we show that the classes of recursive and r.e. relations are indeed distinct.
The proof is based on a diagonalisation argument and was mentioned by E. L. Post
(1922), K. Gödel (1931) and S. C. Kleene (1936).
Theorem 3.5.13. There is an r.e. relation which is not recursive, namely
K = {x : ∃z T¹(x, x, z)} = {x : x ∈ W¹_x}.
Proof. K is r.e. by 3.5.4. Towards a contradiction assume that K is recursive. Then
¬K is recursive, too, and there is an e ∈ ℕ such that ¬K = W¹_e, and we obtain
e ∉ K ⟺ e ∈ W¹_e ⟺ ∃y T¹(e, e, y) ⟺ e ∈ K.
Thus K cannot be recursive. □
Now we close this section by reviewing the connection between the subsets (unary
relations) of ℕ we have studied up to now.
[Diagram: two overlapping regions, the r.e. sets and the complements of r.e. sets;
their intersection is the class of recursive sets, which in turn contains the primitive
recursive sets.]
We learned that this picture is correct: e.g. by Post's theorem we know that the
recursive subsets of ℕ are just those subsets which are r.e. and complements of r.e. sets.
Exercises
E 3.5.1. Prove that the class of r.e. relations is not closed under unbounded universal
quantification.
E 3.5.2. (S. C. Kleene, 1936) Let R ⊆ ℕ be an infinite set. Prove:
a) R is r.e. iff R is the range of a one-one recursive function.
b) R is recursive iff R is the range of a strictly increasing recursive function.
E 3.5.3.
a) There is an n ∈ ℕ with W¹_n = {n}.
b) For any recursive function f there is an n ∈ ℕ with W¹_{f(n)} = W¹_n.
E 3.5.4. Let R ⊆ ℕ² be r.e. Is the function f : ℕ ⇀ ℕ defined by
f(x) ≃ μy R(x, y)
partial recursive?
E 3.5.5. (S. C. Kleene, 1936) Prove that for an r.e. relation P ⊆ ℕ^{n+1} there is a
partial recursive function f : ℕ^n ⇀ ℕ such that
∃y P(~x, y) ⟺ f(~x)↓ ∧ P(~x, f(~x)).
## 3.6 Rice's Theorem
Up to now we have been mainly concerned with recognising sets to be recursive. This
section is devoted to a theorem which may help us to see that many explicitly given
sets are not recursive. It is due to H. Rice in 1953. The idea behind that theorem
is that it is only positively decidable whether two partial recursive functions {e_0}, {e_1} are
extensionally different, i.e. whether
∃x({e_0}(x) ≄ {e_1}(x)).
But now we have to observe the following closure property of partial recursive functions.
Proposition 3.6.1 (Definition by cases). If P_1, …, P_n are pairwise disjoint r.e.
relations and g_1, …, g_n are partial recursive functions, then the function f defined by
f(~z) ≃ g_1(~z) if P_1(~z)
       ⋮
f(~z) ≃ g_n(~z) if P_n(~z)
f(~z) ↑ otherwise
is partial recursive.
Proof. Using Theorem 3.5.9 we only have to prove that G_f is r.e. It is
(~z, y) ∈ G_f ⟺ f(~z) ≃ y
             ⟺ (P_1(~z) ∧ g_1(~z) ≃ y) ∨ … ∨ (P_n(~z) ∧ g_n(~z) ≃ y).
This representation shows that G_f is r.e., since each G_{g_i} is r.e. by Theorem 3.5.9. □
Theorem 3.6.2 (Rice's theorem). Let F be a nonempty set of n-ary partial recursive
functions such that F ≠ {{e}^n : e ∈ CP ∧ (e)_1 = n}. Then the set
{e : {e}^n ∈ F}
is not recursive.
Proof. Let M = {e : {e}^n ∈ F}. To obtain a contradiction assume that M is recursive.
Choose f ∈ F and g ∉ F with g n-ary partial recursive. Now define h : ℕ^{n+1} ⇀ ℕ
by
h(e, ~z) ≃ g(~z) if e ∈ M
h(e, ~z) ≃ f(~z) if e ∉ M.
Since M and ¬M are r.e., h is partial recursive by Proposition 3.6.1. Thus using the
recursion theorem there is an index e_0 ∈ ℕ such that
∀~z h(e_0, ~z) ≃ {e_0}^n(~z).
Now we have
1. e_0 ∈ M ⟹ {e_0}^n(~z) ≃ h(e_0, ~z) ≃ g(~z), so {e_0}^n = g ∉ F, i.e. e_0 ∉ M.
2. e_0 ∉ M ⟹ {e_0}^n(~z) ≃ h(e_0, ~z) ≃ f(~z), so {e_0}^n = f ∈ F, i.e. e_0 ∈ M.
In both cases we have obtained a contradiction. □
Corollary 3.6.3. Let f be partial recursive. Then the set of indices of f, i.e. the set
{e : {e}^n = f}, is not recursive and thus not finite.
Corollary 3.6.4. The set of indices of all recursive functions is not recursive.
This result can be improved in the following fashion.
Theorem 3.6.5. The set of indices of all recursive functions is not r.e.
Proof. Let Tot = {e : {e} is a recursive function}. Assume that Tot is r.e. Then
there is a recursive function f such that Tot = rg(f). Now define g : ℕ → ℕ by
g(x) ≃ {f(x)}¹(x) + 1. Then g is total, since f(n) ∈ Tot for all n and so {f(n)}¹ is total.
Therefore g has an index in Tot, i.e. there is an e ∈ ℕ such that g = {f(e)}¹. Now we
have
{f(e)}¹(e) = g(e) = {f(e)}¹(e) + 1. □
Exercises
E 3.6.1. Prove that Part = {x : W¹_x ≠ ℕ} is not r.e.
Hint: Construct (using the S^m_n-theorem) a primitive recursive function f with
x ∉ K ⟺ f(x) ∈ Part, where K ⊆ ℕ was defined in Theorem 3.5.13.
E 3.6.2. Define Inf = {x : W¹_x is infinite}. Prove:
a) Inf is not recursive.
b) Inf is not r.e.
Hint: Define a recursive function f with
x ∈ Tot ⟺ f(x) ∈ Inf,
where Tot was defined in the proof of Theorem 3.6.5.
E 3.6.3. Define: R ⊆ ℕ is extensional if for all e_0, e_1 ∈ ℕ one has
e_0 ∈ R ∧ {e_0}¹ = {e_1}¹ ⟹ e_1 ∈ R.
a) Let ∅ ≠ R ≠ ℕ be extensional. Prove that there is either a recursive function
f : ℕ → ℕ with
K = {x : f(x) ∈ R}
or a recursive function f : ℕ → ℕ with
¬K = {x : f(x) ∈ R}.
Hint: Let e_0 ∈ ℕ with ∀x {e_0}(x)↑. If e_0 ∉ R then choose e_1 ∈ R. Consider the
function
g(x, y) ≃ {e_1}(y) if x ∈ K
g(x, y) ↑ otherwise.
For e_0 ∈ R the proof runs similarly.
b) Use part a) to prove Theorem 3.6.2.
## 3.7 Random Access Machines
Informally a random access machine is a computing device whose hardware consists of
a memory which is arranged in successively numbered (finitely many) registers. Each
register is capable of storing an arbitrarily large number. The basic instructions are
INC(r): increase the content of register number r by one,
DEC(r): decrease the content of register number r by one,
BEQ(r): check whether the content of register number r is zero.
Formally we may define a random access machine with n+1 registers as a tuple
(ℕ^{n+1}, {INC(r), DEC(r) : r ≤ n}, {BEQ(r) : r ≤ n})
where
INC(r) : ℕ^{n+1} → ℕ^{n+1},
INC(r)(z_0, …, z_r, …, z_n) = (z_0, …, z_r + 1, …, z_n),
DEC(r) : ℕ^{n+1} → ℕ^{n+1},
DEC(r)(z_0, …, z_r, …, z_n) = (z_0, …, z_r ∸ 1, …, z_n),
BEQ(r) : ℕ^{n+1} → {0, 1},
BEQ(r)(z_0, …, z_r, …, z_n) = 1 if z_r = 0, and 0 otherwise.
An instruction for a random access machine is a tuple
(k, inst, l) where k, l ∈ IN and inst ∈ {INC(r), DEC(r) : r ≤ n}, or
(k, inst, l, m) where k, l, m ∈ IN and inst ∈ {BEQ(r) : r ≤ n}.
We call k the identification mark and l as well as m the transition marks of an instruction. A programme for a random access machine is a finite set P of instructions together with a distinguished identification mark, called the start mark, such that every instruction in P is uniquely determined by its identification mark.
The stop marks of a programme P are those transition marks which are not identification marks. Identification and transition marks are subsumed as marks. By M_P we denote the set of marks of a programme P. The transition function for a programme P is the map
∆_P : M_P × IN^{n+1} → M_P × IN^{n+1}
which is defined by distinguishing the following cases:
∆_P(k, (z_0, …, z_n)) =
  (l, (z_0, …, z_r + 1, …, z_n))   if (k, INC(r), l) ∈ P,
  (l, (z_0, …, z_r ∸ 1, …, z_n))   if (k, DEC(r), l) ∈ P,
  (l, (z_0, …, z_r, …, z_n))       if (k, BEQ(r), l, m) ∈ P and z_r = 0,
  (m, (z_0, …, z_r, …, z_n))       if (k, BEQ(r), l, m) ∈ P and z_r ≠ 0,
  (k, (z_0, …, z_n))               otherwise.
We call a function f : IN^n ⇀ IN^m primitive recursive, partial recursive, recursive if the function
⟨f⟩ = ⟨·⟩ ∘ f, i.e. ⟨f⟩(z_1, …, z_n) = ⟨f(z_1, …, z_n)⟩,
is.
Proposition 3.7.1. The transition function ∆_P : IN^{n+2} → IN^{n+2} is primitive recursive.
Proof. This is obvious by the definition, since every finite set is primitive recursive. This will be made more explicit in the exercises. □
We obtain the iterated transition function
∆_P^i : IN × IN^{n+1} → M_P × IN^{n+1}
by
∆_P^i(0, ~z) = (k, ~z), where k is the start mark of the programme P,
∆_P^i(n + 1, ~z) = ∆_P(∆_P^i(n, ~z)).
Thus ∆_P^i(n, ~z) is the n-fold application of the transition function to the start configuration (k, ~z).
Proposition 3.7.2. The iterated transition function is primitive recursive.
The iterated transition function simulates the computation of the computing device under a given programme P. Thus, given the input ~z for a random access machine, ∆_P^i(n, ~z) computes the actual mark in the programme P and the content of the registers after n steps. This is iterated until a stop mark is reached. The result of the computation is then available in register number 0, i.e. it is (⟨∆_P^i(n, ~z)⟩)_1 if (⟨∆_P^i(n, ~z)⟩)_0 is a stop mark. Thus we define:
Definition 3.7.3. A function f : IN^n ⇀ IN is random access machine (RAM) computable if there is a programme P such that
f(~z) ≃ (∆_P^i(μn((∆_P^i(n, ~z))_0 is a stop mark of P), ~z))_1.
Because we have only finitely many stop marks in a given programme P we obtain:
Lemma 3.7.4. Every RAM computable function is partial recursive.
In the exercises we will prove the converse direction. So we have the following characterisation of the partial recursive functions.
Theorem 3.7.5. The partial recursive functions are just the RAM computable functions.
At this point we have been confronted with a universal method to handle any kind of algorithmic computability. In the exercises we will code up the instructions of any given algorithm. Then we will simulate the algorithm by a recursive function. These facts give reason for the so-called Church's thesis, formulated by A. Church and A. M. Turing in 1936.
Church's thesis: Every effectively computable function is recursive.
Exercises
E 3.7.1. Prove the following claims:
a) The functions C^n_k, P^n_k, S are RAM computable.
b) If g : IN^n ⇀ IN and h_1, …, h_n : IN^m ⇀ IN are RAM computable, then Sub(g, h_1, …, h_n) is.
c) If g : IN^n ⇀ IN and h : IN^{n+2} ⇀ IN are RAM computable, then R(g, h) is.
d) If g : IN^{n+1} ⇀ IN is RAM computable, then μg is.
e) Every partial recursive function is RAM computable.
E 3.7.2.
a) Code the instructions of a random access machine into a set of natural numbers.
b) Determine a primitive recursive relation Prg ⊆ IN such that
Prg(e) ⇔ e codes a programme for a random access machine.
c) Determine a primitive recursive relation End ⊆ IN², such that
End(e, y) ⇔ e codes a programme P and y ∈ dom(∆_P) with (y)_0 a stop mark.
d) Prove that the function ∆ : IN^{n+3} → IN^{n+2} with ∆(e, ~x) = ∆_P(~x), if e codes the programme P, is primitive recursive.
3.8 Undecidability of First Order Logic
By the completeness theorem for first order logic we have
⊨ F iff ⊢ F.
The relation ⊢ F is positively decidable, because we have an algorithm producing all formulas F with ⊢ F: in a first step we take all axioms of ⊢ (it can be decided whether F is an axiom or not) and then use the rules of ⊢ to derive new formulas from the ones already produced. Here it is decidable whether we have the premises of a given rule. The question at this point is: is ⊢ F decidable?
Proposition 3.8.1. P ⊆ IN^n is semi-recursive iff there is a programme P with
~x ∈ P ⇔ ∃n((∆_P^i(n, ~x))_0 is a stop mark of P).
Proof. Let P = dom(f) with f partial recursive, and let P be a programme computing f (cf. 3.7.3 and 3.7.5). Then we have
~x ∈ P ⇔ ~x ∈ dom(f) ⇔ ∃n((∆_P^i(n, ~x))_0 is a stop mark of P). □
Now we come to a famous result by Alonzo Church and Alan M. Turing (1936), the unsolvability of the `Entscheidungsproblem': it shows that logical truth (in first order logic) is undecidable.
Theorem 3.8.2 (Church's theorem). The validity of formulas of first order logic is undecidable.
Proof. We will use the following strategy: take P ⊆ IN^k recursively enumerable but not recursive. Using Proposition 3.8.1 we have a programme P which halts exactly on the inputs in P. Now define F_{P,~x} with
~x ∈ P ⇔ ⊨ F_{P,~x}.
If ⊨ F were decidable, then so would be ~x ∈ P, contradicting the fact that P is not recursive.
Now we are going to construct the formulas F_{P,~x}. Let P be a programme for a random access machine with m registers. Then define L to be the first order language with identity containing
• a predicate symbol R with #R = m + 2, intended to simulate the computation, i.e.
R(n, s, ~r) ⇔ ∆_P^i(n, ~x) = (s, ~r),
• a binary predicate symbol < for the natural order on IN,
• a unary function symbol S for the successor function on IN,
• a constant symbol 0 for the least element of IN.
Now define
F_0 = ∀x∀y∀z(x < y ∧ y < z → x < z) ∧
      ∀x(¬x < x) ∧
      ∀x∀y(x < y ∨ x = y ∨ y < x) ∧
      ∀x(0 < x ∨ 0 = x) ∧
      ∀x(x < Sx) ∧
      ∀x∀y(Sx = Sy → x = y) ∧
      ∀x∀y(x < y → Sx < y ∨ Sx = y),
and for n ∈ IN let n + 1 denote the term S(n), i.e. n denotes S^n(0). For any instruction I ∈ P define a formula F_I by:
1. If I = (k, INC(i), l), then
F_I = ∀x∀r_1 … ∀r_m (R(x, k, r_1, …, r_m) → R(Sx, l, r_1, …, Sr_i, …, r_m)).
2. If I = (k, DEC(i), l), then
F_I = ∀x∀r_1 … ∀r_m (R(x, k, r_1, …, r_m) → [(r_i = 0 ∧ R(Sx, l, r_1, …, r_m)) ∨ (¬r_i = 0 ∧ ∃y(r_i = Sy ∧ R(Sx, l, r_1, …, y, …, r_m)))]).
3. If I = (k, BEQ(i), l_1, l_2), then
F_I = ∀x∀r_1 … ∀r_m (R(x, k, r_1, …, r_m) → [(r_i = 0 ∧ R(Sx, l_1, r_1, …, r_m)) ∨ (¬r_i = 0 ∧ R(Sx, l_2, r_1, …, r_m))]),
in accordance with the transition function: on a zero register the computation passes to the first transition mark l_1, otherwise to l_2.
Now let
F_{P,~x} = F_0 ∧ R(0, k, x_1, …, x_m) ∧ ⋀_{I∈P} F_I,
where k is the start mark of P. To prove the theorem we just have to show the following lemma.
Lemma 3.8.3. If e_1, …, e_k are the stop marks of the programme P, then the following are equivalent:
1. ∃n((∆_P^i(n, ~x))_0 is a stop mark of P);
2. ⊨ F_{P,~x} → ∃x∃r_1 … ∃r_m (R(x, e_1, r_1, …, r_m) ∨ … ∨ R(x, e_k, r_1, …, r_m)).
Proof.
2. ⇒ 1.: This is easy if we take the intended interpretation over the natural numbers, i.e. R is interpreted by
{(n, s, r_1, …, r_m) : ∆_P^i(n, ~x) = (s, r_1, …, r_m)}.
1. ⇒ 2.: Let S be an L-structure with S ⊨ F_{P,~x}. Now prove
∆_P^i(n, ~x) = (s, r_1, …, r_m) ⇒ S ⊨ R(n, s, r_1, …, r_m)
by induction on n. For n = 0 we have ∆_P^i(0, ~x) = (k, ~x), and because of S ⊨ F_{P,~x} we have
S ⊨ R(0, k, x_1, …, x_m).
For the induction step we have ∆_P^i(n + 1, ~x) = ∆_P(∆_P^i(n, ~x)) with
∆_P^i(n, ~x) = (s′, r′_1, …, r′_m)
and ∆_P^i(n + 1, ~x) = (s, r_1, …, r_m). By the induction hypothesis we obtain
S ⊨ R(n, s′, r′_1, …, r′_m),
and because of the construction of F_{P,~x} it follows that
S ⊨ R(n+1, s, r_1, …, r_m).
So we have: if ∆_P^i(n, ~x) = (e_i, r_1, …, r_m) for some i ∈ {1, …, k}, then
S ⊨ R(n, e_i, r_1, …, r_m),
which proves 2.
□
By the proof of Lemma 3.8.3 we have proved Theorem 3.8.2, too.
Though `decidable' is not mathematically precise, we rely on the method mentioned at the end of section 3.7, where we tried to explain Church's thesis.
If we intended to be more exact in formulating Theorem 3.8.2 we would have to give a Gödelisation of the language L as follows: define
• ⌜x_i⌝ = ⟨0, i⟩ for a variable x_i,
• ⌜0⌝ = ⟨1, 0⟩ for the constant symbol 0,
• if t is a term, let ⌜St⌝ = ⟨2, ⌜t⌝⟩ (inductively),
• formulas are coded in the following way:
⌜t_1 = t_2⌝ = ⟨3, ⌜t_1⌝, ⌜t_2⌝⟩
⌜t_1 < t_2⌝ = ⟨4, ⌜t_1⌝, ⌜t_2⌝⟩
⌜Rt_1 … t_{m+2}⌝ = ⟨5, ⌜t_1⌝, …, ⌜t_{m+2}⌝⟩
⌜¬F⌝ = ⟨6, ⌜F⌝⟩
⌜F ∧ G⌝ = ⟨7, ⌜F⌝, ⌜G⌝⟩
⌜∀x_i F⌝ = ⟨8, ⌜x_i⌝, ⌜F⌝⟩.
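The coding can be tried out concretely once one fixes a sequence coding ⟨…⟩. The choice below, an iterated Cantor pairing, is only one possible implementation; the text keeps the coding abstract, and any injective primitive recursive coding would do:

```python
# A concrete sketch of the Gödelisation above.

def pair(a, b):
    """Cantor pairing, a bijection IN × IN → IN."""
    return (a + b) * (a + b + 1) // 2 + a

def seq(*xs):
    """⟨x1, ..., xn⟩: the length paired with the iterated pairing of entries."""
    code = 0
    for x in reversed(xs):
        code = pair(x, code)
    return pair(len(xs), code)

def code_var(i):       return seq(0, i)              # ⌜x_i⌝ = ⟨0, i⟩
def code_zero():       return seq(1, 0)              # ⌜0⌝   = ⟨1, 0⟩
def code_succ(t):      return seq(2, t)              # ⌜St⌝  = ⟨2, ⌜t⌝⟩
def code_eq(s, t):     return seq(3, s, t)           # ⌜t1 = t2⌝
def code_less(s, t):   return seq(4, s, t)           # ⌜t1 < t2⌝
def code_not(f):       return seq(6, f)              # ⌜¬F⌝
def code_and(f, g):    return seq(7, f, g)           # ⌜F ∧ G⌝
def code_all(i, f):    return seq(8, code_var(i), f) # ⌜∀x_i F⌝

# Distinct expressions receive distinct codes, e.g. the term SS0 and
# the formula ¬ 0 = SS0:
two = code_succ(code_succ(code_zero()))
assert code_not(code_eq(code_zero(), two)) != two
```

Since `pair` and hence `seq` are primitive recursive, so is the function ~x ↦ ⌜F_{P,~x}⌝ built from such clauses, which is what the argument below uses.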
Now we call a set Γ of formulas decidable if {⌜F⌝ : F ∈ Γ} is a recursive set of natural numbers. Then Theorem 3.8.2 runs as follows:
The set M of valid formulas is not decidable.
For the proof one has (cf. the proof of 3.8.2)
~x ∈ P ⇔ ⌜F_{P,~x}⌝ ∈ M.
One can show now that
f : IN^n → IN, f(~x) = ⌜F_{P,~x}⌝
is a primitive recursive function. If M were recursive, then so would be P, which is a contradiction.
Exercises
E 3.8.1. Prove that the set
Val = {⌜F⌝ : F is valid in all structures} ⊆ IN
is r.e.
E 3.8.2. Define Sat_fin = {⌜F⌝ : F is valid in some finite structure}. Prove:
a) Sat_fin is r.e.
Hint: Show that without loss of generality L is finite, observing only non-isomorphic structures.
b) Sat_fin is not recursive.
Hint: Define F_{P,~x} as in the proof of Theorem 3.8.2, but with x < Sx only for those x which are not maximal w.r.t. <. For the content of the registers one needs r < Sr.
E 3.8.3. Prove that the set
Val_fin = {⌜F⌝ : F is valid in all finite structures}
is not r.e.
Chapter 4
Axiom Systems for the Natural Numbers
The topic of this chapter is the description of the structure of the natural numbers within the frame of first order logic. It will turn out that this structure cannot be categorically described and that there are very interesting phenomena to be observed. These phenomena incorporate a serious limit to the expressive power of first order axiom systems and are known as Kurt Gödel's incompleteness theorems.
4.1 Peano Arithmetic
The non-logical symbols of the language L_PA of Peano Arithmetic are the constant symbol 0 (for zero), the binary function symbols + and · (for addition and multiplication) and a unary function symbol S (for the successor function). Moreover we have the equality symbol `=' as predicate symbol. Thus we regard L_PA as a language with identity. The basic axioms of Peano Arithmetic are the defining axioms for these symbols. These are
(S0) ∀x(¬ S(x) = 0)
(SS) ∀x∀y(S(x) = S(y) → x = y)
(+0) ∀x(x + 0 = x)
(+S) ∀x∀y(x + S(y) = S(x + y))
(·0) ∀x(x · 0 = 0)
(·S) ∀x∀y(x · S(y) = x · y + x)
The induction scheme is
(IND) F_x(0) ∧ ∀x(F → F_x(S(x))) → ∀xF.
The axiom system PA is the set of basic axioms together with the induction scheme, where F ranges over L_PA-formulas. Our first observation is that PA is not categorical. This is because the standard structure N with domain IN is an infinite model of PA (cf. Exercise E 2.2.9). But we are able to strengthen this result.
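Note that the basic axioms (+0), (+S), (·0), (·S) are exactly recursion equations, so + and · on numerals can be computed from S alone. The following small sketch, our own illustration, runs these equations literally on numerals S^n(0):

```python
# The basic axioms of PA read as a functional program: numerals are
# built from Z ("0") by S, and +, · are computed by the recursion
# equations (+0), (+S), (·0), (·S).

Z = ("0",)
def S(x): return ("S", x)

def plus(x, y):
    if y == Z:                       # (+0): x + 0 = x
        return x
    return S(plus(x, y[1]))          # (+S): x + S(y) = S(x + y)

def times(x, y):
    if y == Z:                       # (·0): x · 0 = 0
        return Z
    return plus(times(x, y[1]), x)   # (·S): x · S(y) = x · y + x

def numeral(n):
    """The numeral S^n(0)."""
    return Z if n == 0 else S(numeral(n - 1))

def value(t):
    """Decode a numeral back to a natural number."""
    return 0 if t == Z else 1 + value(t[1])

assert value(plus(numeral(2), numeral(3))) == 5
assert value(times(numeral(2), numeral(3))) == 6
```

This computational reading of the defining axioms is also what lies behind Theorem 4.1.2 below, where the recursion equations pin down the standard model among all models satisfying full induction.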
Theorem 4.1.1. The axiom system PA is not ℵ_0-categorical.
Proof. Let N = (IN, 0, +, ·, λx.x + 1) be the standard structure of the natural numbers; card(IN) = ℵ_0. We defined PA in such a way that we have
N ⊨ PA.
By a now familiar compactness argument we construct a model of PA which is not isomorphic to N. For n ∈ IN let n denote the term S^n(0), i.e. S^0(0) = 0 and S^{n+1}(0) = S(S^n(0)), and now let
T = PA ∪ {c ≠ n : n ∈ IN}
for a new constant symbol c. Then n^N = n, and by the compactness theorem we get a model N′ of T with card(N′) = ℵ_0, in which c^{N′} ≠ n^{N′} holds for all n ∈ IN. For any homomorphic map f : N → N′ we show f(n^N) = n^{N′} by induction on n. For n = 0 we have f(0^N) = 0^{N′}, and for n = m + 1 we get
f(n^N) = f(S^N(m^N)) = S^{N′}(f(m^N)) =_{i.h.} S^{N′}(m^{N′}) = n^{N′}.
Thus c^{N′} ∉ rg(f), which shows that N′ cannot be an epimorphic image of N; in particular N′ ≇ N. □
We borrow from recursion theory the result that every primitive recursive function and relation is definable by an L_PA-formula. We may thus regard the language L_PR, which contains a function/predicate symbol for every primitive recursive function/relation, as an extension of PA by definitions. Let us denote the theory T(L_PR) by NT, called number theory. Then NT is a conservative extension of PA. We will rather work with NT instead of PA.
Theorem 4.1.2. Let A be an NT model satisfying
(Ind) B ⊆ A, 0^A ∈ B and ∀a∈B (S^A a ∈ B) ⇒ B = A.
Then A ≅ N (where N is the standard L_PR-structure with domain IN).
Proof. We define an embedding f : IN → A by
f(0) = 0^A,
f(n + 1) = S^A(f(n)),
i.e. f(k) = k^A. We prove the following claims.
1. f is onto.
Let B = rg(f). Then B ∋ f(0) = 0^A, and for a ∈ B there is an n ∈ IN such that a = f(n). Hence
S^A(a) = S^A(f(n)) = f(n + 1) ∈ B.
By (Ind) we thus get B = A, which proves 1.
2. NT ⊨ ∀x(x = 0 ∨ ∃y(x = Sy)).
Let F be the formula x = 0 ∨ ∃y(x = Sy). Then trivially NT ⊨ F_x(0), and obviously also NT ⊨ ∀x(F → F_x(Sx)). By axiom (IND) it follows that NT ⊨ ∀xF. This proves 2.
3. f(n) = 0^A iff n = 0.
The direction from right to left holds by definition. For the opposite direction let n ≠ 0. Then n = m + 1 for some m, and we get f(n) = S^A(f(m)). Thus f(n) ≠ 0^A by axiom (S0).
4. f is one-one.
We prove f(n) = f(m) ⇒ n = m by induction on m. For m = 0 this follows by 3. Let m = k + 1. Then f(m) = f(k + 1) = S^A(f(k)). Thus n ≠ 0 by 3. Let n = l + 1. Then f(n) = S^A(f(l)). By axiom (SS) it follows that f(k) = f(l), and this entails k = l by the induction hypothesis. Hence m = k + 1 = l + 1 = n.
It remains to show that f is indeed homomorphic, i.e. that
(4.1) f(F(k_1, …, k_n)) = F^A(f(k_1), …, f(k_n))
and
(4.2) R(k_1, …, k_n) ⇔ R^A(f(k_1), …, f(k_n))
hold for any primitive recursive function F and predicate R. We prove (4.1) by induction on the definition of `F is a primitive recursive function'.
If F = C^n_k, then
f(F(k_1, …, k_n)) = f(k) = k^A = F^A(f(k_1), …, f(k_n)).
If F = P^n_k, then
f(F(k_1, …, k_n)) = f(k_k) = k_k^A = F^A(f(k_1), …, f(k_n)).
If F = S, then f(S(k)) = S^A(f(k)) by definition.
If F = Sub(G, H_1, …, H_m), then
f(F(k_1, …, k_n)) = f(G(H_1(~k), …, H_m(~k)))
  =_{i.h.} G^A(f(H_1(~k)), …, f(H_m(~k)))
  =_{i.h.} G^A(H_1^A(f(k_1), …, f(k_n)), …, H_m^A(f(k_1), …, f(k_n)))
  = Sub(G^A, H_1^A, …, H_m^A)(f(k_1), …, f(k_n))
  = F^A(f(k_1), …, f(k_n)).
Finally we assume F = R(G, H) and prove
f(F(k_1, …, k_n, z)) = F^A(f(k_1), …, f(k_n), f(z))
by side induction on z. For z = 0 we have
f(F(~k, z)) = f(G(~k))
  =_{i.h.} G^A(f(k_1), …, f(k_n))
  = F^A(f(k_1), …, f(k_n), f(0)).
For z = u + 1 it follows that
f(F(~k, z)) = f(H(~k, u, F(~k, u)))
  =_{i.h.} H^A(f(k_1), …, f(k_n), f(u), f(F(~k, u)))
  =_{s.i.h.} H^A(f(k_1), …, f(k_n), f(u), F^A(f(k_1), …, f(k_n), f(u)))
  = F^A(f(k_1), …, f(k_n), S^A(f(u))) = F^A(f(k_1), …, f(k_n), f(z)).
This proves (4.1), and (4.2) follows from (4.1) by
R(k_1, …, k_n) ⇔ χ_R(k_1, …, k_n) = 1
  ⇔ f(χ_R(k_1, …, k_n)) = f(1)
  ⇔ χ_R^A(f(k_1), …, f(k_n)) = 1^A
  ⇔ (f(k_1), …, f(k_n)) ∈ R^A,
where χ_R denotes the characteristic function of R. □
Definition 4.1.3. Let N_1 and N_2 be NT models with N_1 ⊆ N_2. N_2 is an end extension of N_1 if
b ∈ N_1 ∧ a <^{N_2} b ⇒ a ∈ N_1.
Theorem 4.1.4. Up to isomorphism, every NT model is an end extension of N.
Proof. Let S ⊨ NT. Put
C = {A ⊆ S : 0^S ∈ A ∧ ∀a∈A (S^S(a) ∈ A)}
and N = ⋂C. This is another way of saying that the set N is defined inductively by the clauses
(i) 0^S ∈ N,
(ii) a ∈ N → S^S(a) ∈ N,
i.e. N is the standard part of S. Then:
1. N ∈ C.
For any A ∈ C we have 0^S ∈ A. Hence 0^S ∈ N. If a ∈ N, then a ∈ A for all A ∈ C. Hence S^S a ∈ A for all A ∈ C and thus also S^S a ∈ N. But this means N ∈ C.
2. A = S ↾ N satisfies (Ind).
Let B ⊆ N such that 0^S ∈ B and B is closed under S^A = S^S ↾ N. Then B ∈ C, which entails N ⊆ B. Hence B = N.
3. b ∈ N ∧ a <^S b ⇒ a ∈ N.
Let B = {b ∈ N : ∀a <^S b (a ∈ N)}. Since S ⊨ ¬∃x(0 > x) we have 0^S ∈ B. Let b ∈ B and a <^S S^S b. Then S^S b = S^A b and a ≤^S b. If a <^S b, then a ∈ N since b ∈ B. If a = b, then a ∈ N because B ⊆ N. Thus S^S b ∈ B, which by 2. entails B = N. This proves 3.
4. A ⊨ NT.
A ⊨ IND follows by 2. So we only have to show that A satisfies the defining equations for primitive recursive functions. Since A is a substructure of S, which does satisfy all these equations, this boils down to showing the closure of N under primitive recursive functions. By induction on k we have:
5. k^S ∈ N for all k ∈ IN.
0^S ∈ N is obvious, and k^S ∈ N entails (k + 1)^S = S^S(k^S) ∈ N since N ∈ C.
Thus let ~z ∈ N^n. We show F^S(~z) ∈ N by induction on the length of F.
C^n_k(~z)^S = k^S ∈ N by 5.
P^n_k(~z)^S = z_k ∈ N by hypothesis.
S(z)^S ∈ N since N ∈ C.
For F = Sub(G, H_1, …, H_m) we get
F^S(~z) = G^S(H_1^S(~z), …, H_m^S(~z)).
By the induction hypothesis we have
(H_1^S(~z), …, H_m^S(~z)) ∈ N^m
and thus also G^S(H_1^S(~z), …, H_m^S(~z)) ∈ N.
Finally let F = R(G, H). We prove F^S(~z, k) ∈ N by induction on k using 2. It is
F^S(~z, 0) = G^S(~z) ∈ N
by the induction hypothesis, and
F^S(~z, S^S k) = H^S(~z, k, F^S(~z, k)).
If F^S(~z, k) ∈ N by the hypothesis of the side induction, then
H^S(~z, k, F^S(~z, k)) ∈ N
by the induction hypothesis. From 2. and 4. we get A ≅ N by 4.1.2. Thus S end-extends N by 3. □
Theorem 4.1.5. Th(N) is undecidable.
Proof. Towards a contradiction assume the decidability of Th(N). But then
K = {e : N ⊨ ∃n T^1(e, e, n)},
where T^1 is the symbol for the Kleene predicate defined in section 3.3, would be decidable, contradicting the fact that K is not recursive. Therefore Th(N) is a complete but undecidable theory. □
Theorem 4.1.6. Th(N) is not recursively enumerably axiomatizable.
Proof. Assume Th(N) is r.e. axiomatizable. Then the predicate F ∈ Th(N) is r.e. (cf. Exercise E 4.1.1). Because Th(N) is complete we have
F ∉ Th(N) ⇔ ¬F ∈ Th(N).
This shows that the complement of Th(N) is r.e., too. But then Th(N) is recursive, i.e. decidable. This contradicts Theorem 4.1.5. □
As an immediate corollary we obtain a theorem of J. Barkley Rosser (1936).
Theorem 4.1.7 (Rosser's theorem). No recursively enumerably axiomatizable theory is complete for N.
Corollary 4.1.8. PA and NT are incomplete for N.
Exercises
E 4.1.1. Let T be an r.e. theory, i.e. the set {⌜F⌝ : F ∈ T} is r.e. Prove that
{⌜F⌝ : T ⊨ F}
is an r.e. set. (For the notion ⌜F⌝ cf. section 4.2.)
4.2 Gödel's Theorems
In this section we are going to give a strengthening of Corollary 4.1.8. We will show that NT is not only incomplete but also ω-incomplete (cf. the next definition). For that reason we are going to examine the limits of provability in first order logic: Kurt Gödel's incompleteness theorems.
Definition 4.2.1. Let L_T be a language extending L_NT and T an L_T-theory.
a) T is ω-consistent if, whenever T ⊢ F_x(n) for all n ∈ IN, then
T ⊬ ¬∀xF.
4.2. Godel's Theorems 159
b) T is ω-inconsistent if there is a sentence ∀xF such that for all n ∈ IN
T ⊢ F_x(n) but also T ⊢ ¬∀xF.
c) T is ω-complete if, whenever T ⊢ F_x(n) for all n ∈ IN, then
T ⊢ ∀xF.
d) T is ω-incomplete if there is a sentence ∀xF such that
T ⊬ ∀xF and T ⊢ F_x(n) for every n ∈ IN.
Proposition 4.2.2.
a) If T is consistent and ω-complete, then T is ω-consistent.
b) ω-inconsistency does not imply inconsistency.
c) If N ⊨ T, then T is ω-consistent.
Proof.
a) Let F be given such that
T ⊢ F_x(n) for all n ∈ IN.
Since T is ω-complete we have
T ⊢ ∀xF.
Because T is consistent this implies T ⊬ ¬∀xF.
b) Take T = NT + {n ≠ c : n ∈ IN} for a new constant symbol c. By the usual compactness argument T is consistent. So it suffices to show that T is ω-inconsistent. But this is easy, since
T ⊢ n ≠ c for all n ∈ IN
and
T ⊢ ¬∀x(x ≠ c), because T ⊢ ∃x(x = c).
c) If T ⊢ ¬∀xF, it follows that N ⊨ ¬∀xF. Therefore
N ⊨ ¬F_x(n)
for some n ∈ IN, and hence T ⊬ F_x(n) for this n. □
Corollary 4.2.3. NT is ω-consistent.
Proof. This is easy since N ⊨ NT. □
Now we are going to prove Gödel's theorems, which are stated here in a preliminary version for certain extensions T of NT.
First Incompleteness Theorem. There is an L_NT-sentence G_1 such that
1. T ⊬ G_1; in particular N ⊨ G_1, since G_1 states its own unprovability;
2. T ⊬ ¬G_1.
This result is based on the well-known liar antinomy. Think of the sentence G_1 as stating `I am unprovable in T'. If T ⊢ G_1, then G_1 is true and thus not provable, a contradiction; so we have T ⊬ G_1. This result can be sharpened in the following way. Think of a sentence
con(T)
in the language of number theory stating that T is consistent.
Second Incompleteness Theorem. T ⊬ con(T).
In the rest of this section we are going to prove these results. Because we do not want to be too technical, we restrict ourselves to stating some facts which will be proved extensively in the appendix. To avoid a waste of energy we do not distinguish between functions and symbols for them, between constants and symbols for them, and between predicates and predicate symbols. Besides, we restrict ourselves to extensions T of NT which are formulated in L_NT. Now we state the facts used for the incompleteness theorems:
I. There is a simple (primitive recursive) coding (also called arithmetisation) of the language L_NT, i.e. we have for example primitive recursive relations Term ⊆ IN, Fml ⊆ IN such that
Term(n) ⇔ n is the code of a term,
Fml(n) ⇔ n is the code of a formula.
We denote by ⌜t⌝ the code of the term t and by ⌜F⌝ the code of the formula F. Using this arithmetisation we can fix the extensions of NT which are observed here: for the rest of this section let
T be a consistent primitive recursive extension of NT,
i.e. a set of sentences such that {⌜F⌝ : F ∈ T \ NT} is primitive recursive.
II. There are primitive recursive functions sub_n : IN^{n+1} → IN such that
sub_n(⌜F⌝, ⌜t_1⌝, …, ⌜t_n⌝) = ⌜F_{x_1,…,x_n}(t_1, …, t_n)⌝.
III. There is a primitive recursive function N : IN → IN such that
N(n) = ⌜n⌝,
where ⌜n⌝ is the code of the constant symbol n.
4.2. Godel's Theorems 161
IV. There is a definition of provability, i.e. there is a binary primitive recursive predicate Proof_T such that Proof_T(e, ⌜F⌝) states that e codes a T-proof of F. Setting
Prvbl_T(⌜F⌝) ⇔ ∃x Proof_T(x, ⌜F⌝)
we will have:
1. T ⊢ F ⇒ NT ⊢ Prvbl_T(⌜F⌝) (Completeness)
2. If T is ω-consistent, then T ⊢ Prvbl_T(⌜F⌝) ⇒ T ⊢ F (Soundness).
Once these hypotheses are established we can use them to prove a result similar to the recursion theorem in chapter 3.
Lemma 4.2.4 (Diagonalisation lemma). Let F be a formula with FV(F) = {x_1}. Then there is a sentence G such that
NT ⊢ G ↔ F_{x_1}(⌜G⌝).
Proof. Let such an F be given. Define
H = F_{x_1}(sub_1(x_1, N(x_1))),
e = ⌜H⌝,
G = H_{x_1}(e).
Then we have
G = H_{x_1}(e)
  ↔ F_{x_1}(sub_1(e, N(e)))
  ↔ F_{x_1}(sub_1(⌜H⌝, ⌜e⌝))
  ↔ F_{x_1}(⌜H_{x_1}(e)⌝)
  ↔ F_{x_1}(⌜G⌝).
Surely all this can be done in NT because we only use defining equations and first order logic with identity. So the proof is finished. □
The diagonalisation lemma is the key lemma for incompleteness. Up to now we have only used hypotheses I.-III. Hypothesis IV. is used in Gödel's first theorem, published in 1931.
Theorem 4.2.5 (First incompleteness theorem). There is an L_NT-sentence G_1 (stating that it is not provable in T itself) such that
1. T ⊬ G_1 (so G_1 is true, i.e. N ⊨ G_1),
and if T is ω-consistent, then
2. T ⊬ ¬G_1.
Proof. Use Lemma 4.2.4 to obtain a sentence G_1 with
NT ⊢ G_1 ↔ ¬Prvbl_T(⌜G_1⌝).
Then we have T ⊬ G_1, since otherwise by IV.1.
NT ⊢ Prvbl_T(⌜G_1⌝),
which means NT ⊢ ¬G_1 and T ⊢ G_1, a contradiction since T is a consistent extension of NT. Now let T be ω-consistent and assume towards a contradiction
T ⊢ ¬G_1.
Then by the definition of G_1, i.e. since T extends NT and
T ⊢ G_1 ↔ ¬Prvbl_T(⌜G_1⌝),
we obtain
T ⊢ Prvbl_T(⌜G_1⌝).
By IV.2. we obtain
T ⊢ G_1,
contradicting the consistency of T. □
This gives the rst strengthening of Corollary 4.1.8.
Corollary 4.2.6. NT and PA are incomplete.
We now define
con(T) = ¬Prvbl_T(⌜0 = 1⌝).
Thus con(T) expresses that T does not derive 0 = 1. Together with pure logic this entails that T cannot derive any false sentence. We have the following property.
Lemma 4.2.7. NT ⊢ con(T) ⇒ T is consistent.
Proof. If T is inconsistent, then T ⊢ 0 = 1, since T proves any formula. By IV.1.
NT ⊢ Prvbl_T(⌜0 = 1⌝),
but since NT is consistent,
NT ⊬ ¬Prvbl_T(⌜0 = 1⌝), i.e. NT ⊬ con(T).
This was to be shown. □
To prove Godel's second theorem we just have to state some hypotheses which are
proven in the appendix. The rst one is just a restatement of IV.1.
(G1) T ` F ) NT ` PrvblT (pF q)
(G2) NT ` PrvblT (pF q) ^ PrvblT (pF ! Gq) ! PrvblT (pGq)
(G3) NT ` PrvblT (pF q) ! PrvblT (pPrvblT (pF q)q):
With these assumptions we prove the following key lemma.
4.2. Godel's Theorems 163
Lemma 4.2.8. NT ⊢ G_1 ↔ con(T).
Proof. First we prove the direction from left to right. Because
T ⊢ 0 = 1 → G_1
we obtain by (G1) and (G2)
NT ⊢ Prvbl_T(⌜0 = 1⌝) → Prvbl_T(⌜G_1⌝).
By the definition of G_1 we have
(4.3) NT ⊢ G_1 ↔ ¬Prvbl_T(⌜G_1⌝).
This entails
NT ⊢ Prvbl_T(⌜0 = 1⌝) → ¬G_1,
i.e.
NT ⊢ G_1 → con(T).
For the opposite direction we start from
T ⊢ G_1 ∧ ¬G_1 → 0 = 1,
which by (G1) and (G2) gives
(4.4) NT ⊢ Prvbl_T(⌜G_1⌝) ∧ Prvbl_T(⌜¬G_1⌝) → ¬con(T).
By (4.3) and (G1) we have
NT ⊢ Prvbl_T(⌜Prvbl_T(⌜G_1⌝) → ¬G_1⌝)
and by (G2) it follows that
NT ⊢ Prvbl_T(⌜Prvbl_T(⌜G_1⌝)⌝) → Prvbl_T(⌜¬G_1⌝).
Now (G3) is used to obtain
NT ⊢ Prvbl_T(⌜G_1⌝) → Prvbl_T(⌜¬G_1⌝).
Using this and (4.4) we have
NT ⊢ con(T) → ¬Prvbl_T(⌜G_1⌝).
By (4.3) this is
NT ⊢ con(T) → G_1. □
Now we can prove Godel's second theorem which states that an adequate theory cannot
prove its own consistency. This theorem has also been published in 1931. It destroyed
the so-called Hilbert's programme which was initiated by David Hilbert. Aim of
this programme was that it should be possible to prove the consistency of any given
(mathematical) theory by nite means, i.e. means available in NT:
Theorem 4.2.9 (Second incompleteness theorem). T ⊬ con(T).
Proof. By the first incompleteness theorem we have
T ⊬ G_1
and by Lemma 4.2.8
T ⊢ con(T) → G_1.
So we have T ⊬ con(T). □
Using the facts which we needed for the proof of the second incompleteness theorem we can now prove a nice result.
Theorem 4.2.10. NT is ω-incomplete.
Proof. By the second incompleteness theorem we have
NT ⊬ con(NT),
i.e. we have
NT ⊬ ∀x ¬Proof_NT(x, ⌜0 = 1⌝).
But we also have
NT ⊢ ¬Proof_NT(n, ⌜0 = 1⌝) for every n ∈ IN
by Theorem A.1.16, since NT is consistent and hence NT ⊬ 0 = 1. □
Inspecting the proof of this theorem we obtain the following corollary.
Corollary 4.2.11. There is a quantifier free formula F with FV(F) = {x}, such that NT ⊢ F_x(n) for every n ∈ IN but NT ⊬ ∀xF.
Exercises
E 4.2.1. (A. Tarski) Prove that there is no L_NT-formula Tr with FV(Tr) = {x_1} such that for all sentences F
NT ⊢ F ↔ Tr(⌜F⌝).
Chapter 5
Other Logics

In the rst four chapters of this book we have been only confronted with one kind of
logic:
rst order logic.
As it is for rst order logic a (general) logic is given by two aspects: its syntax and
its semantics. In this chapter we will give various modi cations of the syntax and
semantics of rst order logic. We learned that rst order logic has two important
features:
compactness and Lowenheim-Skolem
properties. These give reason for certain limitations of the expressive power of the rst
order logic. We will try to analyse these properties for other logics.
5.1 Many-Sorted Logic
In contrast to first order logic we now have different sorts of variables ranging over different universes. For example, if we want to treat vector spaces in a many-sorted logic, we will choose a two-sorted language where one sort of variables ranges over vectors and the other over scalars. First we describe the syntax of many-sorted logic. For the rest of this section we fix a set I, the set of universes.
Definition 5.1.1. The alphabet of a many-sorted language L_I consists of
1. for every i ∈ I countably many variables, denoted by x^i, y^i, z^i, …
2. a set C of constant symbols, denoted by c, d, … For a constant symbol c ∈ C we have #c ∈ I.
3. a set F of function symbols, denoted by f, g, h, … For a function symbol f ∈ F we have #f ∈ I^{n+1}, n ≥ 1.
4. a set P of predicate symbols, denoted by P, Q, R, … For a predicate symbol P ∈ P we have #P ∈ I^n, n ≥ 1.
5. the sentential connectives ∧, ¬ (and if you want ∨, →)
6. the quantifiers ∀, ∃
and auxiliary symbols.
This definition generalises Definition 1.1.1, where it is assumed that I consists of one single element. For c ∈ C, #c is the number of the universe where the object named by c lives. Here #f = (i_1, …, i_n, i_{n+1}) indicates that the n-ary function named by f maps objects of the universes numbered by i_1, …, i_n into the universe with the number i_{n+1}. And a predicate named by P with #P = (i_1, …, i_n) is a subset of the Cartesian product of the universes with numbers i_1, …, i_n.
The variables x^i, y^i, … will range over the universe numbered by i. L_I is determined by C, F, P (and in fact by I). So we will write L_I(C, F, P) when stressing this fact. After these explanatory words we define terms and formulas.
Definition 5.1.2. Let L_I(C, F, P) be a many-sorted language. We define the terms of sort i, for all i ∈ I simultaneously, by induction.
1. Every variable x^i and every constant c with #c = i is a term of sort i.
2. If t_1, …, t_n are terms of sorts i_1, …, i_n, respectively, and f ∈ F is a function symbol with #f = (i_1, …, i_n, i_{n+1}), then (f t_1 … t_n) is a term of sort i_{n+1}.
Simultaneously with this definition we might have defined the sets FV(t) and FV_i(t) of variables and variables of sort i occurring (free) in the term t. The exact formulation of such definitions is left to the reader from now on.
Definition 5.1.3. Let L_I(C, F, P) be a many-sorted language. We define the formulas inductively.
1. If t_1, …, t_n are terms of sorts i_1, …, i_n, respectively, and P ∈ P is a predicate symbol with #P = (i_1, …, i_n), then (P t_1 … t_n) is a formula.
2. If F and G are formulas, then so are (¬F) and (F ∧ G).
3. If F is a formula and x^i is a variable such that x^i ∉ BV(F), then ∀x^i F and ∃x^i F are formulas.
This finishes the syntactical part of many-sorted logic. Now we turn to semantics, i.e. we define L_I-structures.
Definition 5.1.4. Let L_I = L_I(C, F, P) be a many-sorted language. An L_I-structure is a tuple
S_I = ((S_i)_{i∈I}, C, F, P)
with the following properties:
1. For all i ∈ I, S_i is a non-empty set. It is called the universe of sort i.
2. C = {c^{S_I} : c ∈ C}, where for #c = i we have c^{S_I} ∈ S_i.
3. F = {f^{S_I} : f ∈ F}, where for f ∈ F with #f = (i_1, …, i_n, i_{n+1}) the interpretation f^{S_I} is a function
f^{S_I} : S_{i_1} × … × S_{i_n} → S_{i_{n+1}}.
4. P = {P^{S_I} : P ∈ P}, where for P ∈ P with #P = (i_1, …, i_n) the interpretation P^{S_I} is a predicate
P^{S_I} ⊆ S_{i_1} × … × S_{i_n}.
To continue the development of semantics, we call a map
Φ : V_I → ⋃_{i∈I} S_i
from the variables V_I of a many-sorted language L_I into the universes ⋃_{i∈I} S_i of an L_I-structure S_I = ((S_i)_{i∈I}, C, F, P) an assignment if for every i ∈ I
Φ ↾ V_i : V_i → S_i,
i.e. the variables of sort i are mapped into the universe S_i. With this definition we define t^{S_I}[Φ] in the same manner as in Definition 1.3.2. We write S_I also for ⋃_{i∈I} S_i.
Lemma 5.1.5. Let S^I be an L_I-structure and β an S^I-assignment. Then t^{S^I}[β] ∈ S_i for every L_I-term t of sort i.
Proof. By induction on the definition of t.
Now the same definition of Val_{S^I}(F, β) as in section 1.1 works, and we write
S^I ⊨ F[β]
instead of Val_{S^I}(F, β) = t. We can prove the same simple results about the semantics as we did for first order logic. Here we give only one example.
Lemma 5.1.6.
a) S^I ⊨ ∀x_i F[β] iff S^I ⊨ F_{x_i}(s_i)[β] for all s_i ∈ S_i.
b) S^I ⊨ ∃x_i F[β] iff S^I ⊨ F_{x_i}(s_i)[β] for some s_i ∈ S_i.
This finishes the definition of the semantics of L_I. In the rest of this section we are going to examine the expressive power of many-sorted logics. By the above remarks it comprises first order logic, which is the special case of a single sort. But is many-sorted logic `contained' in first order logic? This is true in the sense that a many-sorted logic L_I can be reduced to a first order logic L. This will be done now.
First we assume that we have equality as a basic symbol among the logical symbols, interpreted in the standard way. In fact we have in L_I for every sort i a symbol =_i which is interpreted by the set {(s, s) : s ∈ S_i} in an L_I-structure S^I.
Theorem 5.1.7. Many-sorted logic with identity is a conservative extension of many-sorted logic.
168 V. Other Logics
Proof. This is done similarly to section 1.10.
Fix a many-sorted language L_I = L_I(C, F, P) where I is countable. To simulate this language in first order logic take
L = L(C, F, P ∪ {P_i : i ∈ I}),
where the new predicate symbols will be interpreted by the universes of an L_I-structure. Now we translate the syntactical part of L_I into L. Let ⁺ : V_I → V be a one-one mapping of the variables of L_I onto the variables of L. (That is the only reason we assumed I to be countable.)
Definition 5.1.8. We define the L-term t⁺ for L_I-terms t by induction on the definition of t:
1. If t is a variable x_i, then t⁺ is the variable of L just defined above.
2. If t is a constant symbol c, then t⁺ is the constant symbol with the same name in L.
3. If t is a term ft₁…tₙ, then t⁺ is the L-term f t₁⁺…tₙ⁺, where f is the corresponding function symbol in L.
Definition 5.1.9. We define the L-formula F⁺ for an L_I-formula F by induction on the definition of F.
1. If F is Pt₁…tₙ, where #P = (i₁, …, iₙ), then let F⁺ be the L-formula
P_{i₁}(t₁⁺) ∧ … ∧ P_{iₙ}(tₙ⁺) ∧ P t₁⁺…tₙ⁺.
2. If F is ¬G, then let F⁺ be ¬G⁺, and if F is G ∧ H, let F⁺ be G⁺ ∧ H⁺.
3. If F is the formula ∀x_i G, then F⁺ is the L-formula
∀x(P_i(x) → G⁺),
and if F is ∃x_i G, then F⁺ is given by
∃x(P_i(x) ∧ G⁺),
where x is the variable (x_i)⁺.
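The relativisation of quantifiers in this definition is entirely mechanical. The following Python fragment (our illustration, not part of the text; formulas are nested tuples and the clause tags are our own) implements the quantifier clauses, guarding ∀ by P_i(x) → … and ∃ by P_i(x) ∧ …:

```python
# Translate many-sorted formulas into one-sorted ones by relativising
# quantifiers to the sort predicates P_i (clause 3 above).  The sort
# conjuncts of clause 1 are omitted in this sketch.

def translate(F):
    op = F[0]
    if op == "atom":                       # clause 1 (simplified)
        return F
    if op == "not":                        # clause 2
        return ("not", translate(F[1]))
    if op == "and":
        return ("and", translate(F[1]), translate(F[2]))
    if op == "forall":                     # clause 3: forall xi G ~> forall x (Pi(x) -> G+)
        _, (x, sort), G = F
        return ("forall", x, ("imp", ("P", sort, x), translate(G)))
    if op == "exists":                     # clause 3: exists xi G ~> exists x (Pi(x) & G+)
        _, (x, sort), G = F
        return ("exists", x, ("and", ("P", sort, x), translate(G)))
    raise ValueError(op)

F = ("forall", ("x", 2), ("atom", "Q", "x"))
assert translate(F) == ("forall", "x", ("imp", ("P", 2, "x"), ("atom", "Q", "x")))
```

The asymmetry between the two guards (→ for ∀, ∧ for ∃) is exactly what keeps the translation faithful on structures satisfying the ontology axioms defined next.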
Now we are going to define a set of L-sentences which describes the ontology of an L_I-structure in such a way that from an L-model S of the translation of these sentences we can easily recover an L_I-structure S^I.
Definition 5.1.10. Let Ont_I be the set of the following L_I-formulas:
• ∃x_i(y_i = x_i) for any variable y_i of sort i,
• ∃x_i(x_i = c) for any constant symbol c with #c = i,
• ∀x_{i₁}…∀x_{iₙ}∃x_{i_{n+1}}(f x_{i₁}…x_{iₙ} = x_{i_{n+1}}) for any function symbol f with #f = (i₁, …, iₙ, i_{n+1}).
Here we have suppressed the sorts of the equality symbols, since they are all translated by the same symbol. Now let Ont be the translation of Ont_I into the first order language L.
Lemma 5.1.11. For any L_I-structure S^I and any S^I-assignment β_I we have
S^I ⊨ Ont_I[β_I].
This is obvious by the definition of Ont_I.
Theorem 5.1.12. If S is an L-structure with S ⊨ Ont[β], then there is an L_I-structure S^I such that for any L_I-formula F we have
S ⊨ F⁺[β] ⇔ S^I ⊨ F[β_I],
where β_I(x_i) = β((x_i)⁺).
Proof. Let S = (S, C, F, P ∪ {S_i : i ∈ I}) be given, where S_i = P_i^S for i ∈ I. Now define S^I as follows.
• (S_i)_{i∈I} is given by the interpretation of the P_i's.
• c is interpreted by c^S. Then we have c^{S^I} ∈ S_i for #c = i, since
S ⊨ (∃x_i(x_i = c))⁺,
which is in fact
S ⊨ ∃x(P_i(x) ∧ x = c).
• f with #f = (i₁, …, iₙ, i_{n+1}) is interpreted by
f^S ↾ S_{i₁} × … × S_{iₙ}.
Since we have assumed S ⊨ Ont[β], we know
f^{S^I} : S_{i₁} × … × S_{iₙ} → S_{i_{n+1}}.
• P with #P = (i₁, …, iₙ) is interpreted by
P^S ∩ (S_{i₁} × … × S_{iₙ}).
Furthermore β_I is an S^I-assignment, because from S ⊨ Ont[β] we know
S ⊨ ∃x(P_i(x) ∧ x = (y_i)⁺)[β],
i.e. β((y_i)⁺) ∈ S_i. Now we have to prove
S ⊨ F⁺[β] ⇔ S^I ⊨ F[β_I].
This is done by induction on the definition of F and is quite easy.
Theorem 5.1.13. If S^I is an L_I-structure, then there is an L-structure S such that for any S^I-assignment β_I there is an S-assignment β such that
S^I ⊨ F[β_I] ⇔ S ⊨ F⁺[β]
holds for all L_I-formulas F.
Proof. Let S^I = ((S_i)_{i∈I}, C, F, P) be given. Then define S as follows:
• S = ∪_{i∈I} S_i,
• c^S = c^{S^I} for c ∈ C,
• f^S : S^n → S is defined by
f^S(s₁, …, sₙ) = f^{S^I}(s₁, …, sₙ) if this is defined, and f^S(s₁, …, sₙ) = s otherwise,
where s is an arbitrary element of S. Note that f^{S^I}(s₁, …, sₙ) is defined only for s₁ ∈ S_{i₁}, …, sₙ ∈ S_{iₙ} if #f = (i₁, …, iₙ, i_{n+1}).
• P^S = P^{S^I} for P ∈ P.
Now we only have to define β by β((x_i)⁺) = β_I(x_i). Here we make use of ⁺ : V_I → V being one-one and onto. Finally
S^I ⊨ F[β_I] ⇔ S ⊨ F⁺[β]
is proved by induction on the definition of F. This is easy and left to the reader.
Theorem 5.1.14 (Compactness for many-sorted logic). Let M be a set of L_I-formulas such that every finite subset of M is consistent. Then M is consistent.
Proof. Let M be finitely consistent. Then by Lemma 5.1.11 M ∪ Ont_I is finitely consistent. By Theorem 5.1.13 M⁺ ∪ Ont is a finitely consistent set of first order formulas. By the compactness theorem for first order logic (with identity) M⁺ ∪ Ont is consistent. Using Theorem 5.1.12, M is consistent.
Theorem 5.1.15 (Löwenheim-Skolem for many-sorted logic). Let S^I be an L_I-structure and κ = card(S^I) ≥ ω.
a) For any infinite λ with card(L_I) ≤ λ ≤ κ and any S₀^I ⊆ S^I (i.e. S₀^i ⊆ S^i for all i ∈ I) with card(S₀^I) ≤ λ there is an elementary substructure S′^I ⪯ S^I such that S₀^I ⊆ S′^I and card(S′^I) = λ.
b) For any λ ≥ max{κ, card(L_I)} there is an elementary extension S′^I ⪰ S^I such that card(S′^I) = λ.
Proof.
a) By Theorem 5.1.13 we find an L-structure S with
card(S) = card(S^I) = κ
such that for any S^I-assignment β_I we have a β with
S^I ⊨ F[β_I] ⇔ S ⊨ F⁺[β].
In particular S ⊨ Ont[β]. By the Löwenheim-Skolem downward theorem for L we find an elementary substructure S′ ⪯ S with S₀^I ⊆ S′ and card(S′) = λ such that
(5.1) S ⊨ F⁺[β] ⇔ S′ ⊨ F⁺[β]
for any S′-assignment β. Since we had S = S^I, we obtain by the proof of 2.2.10
S′ = ∪_{i∈I} S_i′,
so if we apply Theorem 5.1.12 to the L-structure S′, we get an L_I-structure S′^I with card(S′^I) = card(S′) = λ. By (5.1) we have
S′ ⊨ Ont[β]
for the above β. Therefore we have (using the definitions of β and β_I in 5.1.12 and 5.1.13, respectively) by Theorem 5.1.12
S^I ⊨ F[β_I] ⇔ S′^I ⊨ F[β_I]
for any S′^I-assignment β_I. Thus S′^I ⪯ S^I.
b) This follows from compactness and the Löwenheim-Skolem upward theorem, as we remarked for first order logic in section 2.2.
We close this section by giving a complete calculus for many-sorted logic. Here we only have to change points 2. and 5. of Definition 1.7.1, defining M ⊢_I F as follows:
2_I. F_{x_i}(t) → ∃x_i F for all L_I-formulas F and terms t of sort i,
5_I. M ⊢_I F → G implies M ⊢_I ∃x_i F → G for x_i ∉ FV(M ∪ {F}).
The other axioms and the other rule do not change. By adapting the proof of the completeness theorem in section 1.7 we obtain the following result.
Theorem 5.1.16 (Completeness theorem for ⊢_I). Let M be a set of L_I-formulas and F an L_I-formula. Then M ⊢_I F iff for all L_I-structures S^I and all S^I-assignments β with S^I ⊨ M[β] we have S^I ⊨ F[β].
From the above observations we learn that the logics L_I do not enable us to state more powerful propositions than first order logic. Let us review this section: the reason why many-sorted logic can be embedded into first order logic is that the semantics of L_I can be expressed in terms of first order logic (cf. Definition 5.1.10), i.e. the semantics of many-sorted logic is in fact a first order semantics.
Exercises
E 5.1.1. Finish the proofs of Theorem 5.1.12 and Theorem 5.1.13.
E 5.1.2. Prove the omitting types theorem for many-sorted logic: let T be a countable and consistent L_I-theory, and let M be a set of formulas with free variables among x₁^{i₁}, …, xₙ^{iₙ} without a T-generator. Then there is a countable model S^I of T such that no (x₁^{i₁}, …, xₙ^{iₙ})-type in S^I contains M.
5.2 ω-Logic
If we take a two-sorted language and a fixed structure S in which we interpret one of the two sorts of variables, we obtain so-called S-logic. In this section we take IN (= ω) as the fixed structure. We now define the syntax and semantics of L_ω.
Definition 5.2.1. L_ω is a two-sorted language with identity. We denote the two sorts of variables by
n, m, n₀, …
x, y, z, …
Furthermore, for every n ∈ IN we have a constant symbol n in L_ω. This finishes the definition of the syntax of L_ω as a two-sorted language. The difference from a two-sorted logic is given by the semantics: here we want the first sort to range over the natural numbers.
Definition 5.2.2. An L_ω-structure S_ω is a structure for the two-sorted language L_ω such that
S_ω = ((IN, S), C, F, P)
and with
n^{S_ω} = n ∈ IN.
Our first observation about ω-logic is the failure of the compactness property.
Theorem 5.2.3. ω-logic does not have the compactness property.
Proof. Let L_ω have a constant symbol c of the sort of the natural numbers and take
M = {n ≠ c : n ∈ IN}.
Then M is finitely consistent in ω-logic, as it is in first order logic. But in ω-logic M is inconsistent, since we have to interpret c by a natural number.
In ω-logic we are able to express the torsion property of a group. Remember that due to the compactness theorem this was not possible in first order logic (cf. the end of section 1.5). In ω-logic we add some axioms to the list of first order axioms of group theory:
x⁰ = 1
x^{n+1} = xⁿ · x
If Ax_GT^ω is Ax_GT plus these axioms, we obtain
S_ω ⊨ Ax_GT^ω iff S_ω is a group,
and for a structure with S_ω ⊨ Ax_GT^ω we have
S_ω ⊨ ∀x∃n(n ≠ 0 ∧ xⁿ = 1) iff S_ω has the torsion property.
Theorem 5.2.4. ω-logic has neither the Löwenheim-Skolem upward property nor the Löwenheim-Skolem downward property.
Proof. To refute the Löwenheim-Skolem properties we define a language L_ω and an L_ω-sentence F such that for all L_ω-structures S_ω we have
S_ω ⊨ F ⇒ card(S) = ℵ₀.
Therefore take a function symbol f into L_ω and define F so as to express that f^{S_ω} is a one-one map from IN onto S, i.e. let F be the formula
∀x∃n(f(n) = x) ∧ ∀n∀m(f(n) = f(m) → n = m).
Now we introduce a calculus for ω-logic. This is done in the following way: we define M ⊢_ω F similarly to section 5.1, where we defined a Hilbert style calculus for many-sorted logic, and add the clause
6. M ⊢_ω F_n(m) for all m ∈ IN implies M ⊢_ω ∀nF. (ω-rule)
Unlike all rules considered previously, the ω-rule is not a finite rule: it requires infinitely many premises.
Lemma 5.2.5 (ω-soundness theorem). M ⊢_ω F implies M ⊨_ω F.
Proof. Here by M ⊨_ω F we denote that for all L_ω-structures S_ω with S_ω ⊨ M we have S_ω ⊨ F. All we have to check is the ω-rule. But it is sound, since we have
S_ω ⊨ ∀nF iff S_ω ⊨ F_n(m) for all m ∈ IN.
A more surprising fact is the completeness of the calculus with the ω-rule with respect to ω-structures. This was discovered by L. Henkin (1954) and S. Orey (1956).
Theorem 5.2.6 (ω-completeness theorem). Let L_ω be countable and M a set of L_ω-formulas. Then
M ⊨_ω F implies M ⊢_ω F.
Proof. Suppose M ⊬_ω F. Think of L_ω as a two-sorted language and denote the two-sorted calculus by M ⊢ F (as defined in section 5.1, but only for the two-sorted language L_ω). Now set
T = {G : M ⊢_ω G and G is a sentence}.
Since M ⊬_ω F, we have that
T′ = T ∪ {¬F}
is consistent (in the two-sorted sense) because of
(5.2) T ⊢ G ⇔ M ⊢_ω G.
Now let m be a variable and
Γ = {m ≠ n : n ∈ IN}.
Next we prove that Γ has no T′-generator (in the two-sorted sense). Suppose G is a T′-generator of Γ. Then we have for each n ∈ IN
T′ ⊢ G → m ≠ n.
Substituting n for m and using the identity axioms we obtain
T′ ⊢ ¬G_m(n).
But this entails
T ⊢ ¬F → ¬G_m(n).
Using (5.2), the ω-rule and (5.2) again we get
T ⊢ ¬F → ¬G,
and this implies
(5.3) T′ ⊢ ¬G.
But we required that G is a T′-generator, which means that T′ ∪ {G} is consistent, contrary to (5.3). Thus we have proved that Γ has no T′-generator. By the omitting types theorem for many-sorted logic (cf. Exercise E 5.1.2) we get a model S of T′ such that no 1-type in S contains Γ. This means that S is an ω-model, say S_ω, because for every s ∈ S in the sort of the natural numbers there is an n ∈ IN such that
S_ω ⊨ m = n[s].
Furthermore we have
S_ω ⊨ M and S_ω ⊨ ¬F.
But this means M ⊭_ω F.
Here we close our observations about ω-logic.
5.3 Higher Order Logic
In this section we deal with second order logic and mention how to change or extend the definitions to obtain third or even higher order logic. Second order logic is intended to express properties of elements and subsets (second order objects) of a non-fixed structure. In addition, third order logic includes statements concerning subsets of subsets (third order objects) of that structure. We will see that second (and of course higher) order logic is at least as strong as ω-logic. That is why we cannot expect it to have the compactness or Löwenheim-Skolem property.
Definition 5.3.1. L₂ is a two-sorted language with identity. We denote the two sorts of variables by
x, y, z, …
X, Y, Z, …
Furthermore, we have a binary predicate symbol ε with #ε = (1, 2), i.e. we have formulas of the kind xεX.
In general, for nth order logic we have an n-sorted language Lₙ containing predicate symbols ε₁, …, ε_{n-1} with #εᵢ = (i, i+1) for i = 1, …, n-1. The semantics is declared as follows.
Definition 5.3.2. An L₂-structure S₂ is a structure for the two-sorted language L₂ such that
S₂ = ((S, Pow(S)), C, F, P)
and with ε^{S₂} = ∈, i.e. the interpretation of the ε-symbol is the element relation. S is called the first order part of S₂.
For third order logic we consider three-sorted structures of the form
S₃ = ((S, Pow(S), Pow(Pow(S))), C, F, P)
and interpret ε₁, ε₂ by the ∈-relation. Third order logic thus extends second order logic, and so we restrict ourselves to second order logic from now on. All results obtained for second order logic are valid for all higher order logics.
First we are going to define the natural numbers in L²_PA, where L²_PA is the second order version of L_PA (cf. section 1.4.1) with a constant symbol N for the natural numbers. The axiom system PA² is formulated in L²_PA, comprises PA (with the induction scheme extended to L²_PA-formulas) and contains in addition the axioms
(N1) 0εN ∧ ∀x(xεN → S(x)εN)
(N2) ∀X(0εX ∧ ∀x(xεX → S(x)εX) → ∀x(xεN → xεX))
These two axioms express that N is a constant symbol for the least set containing 0 which is closed under the successor function, i.e. the set IN.
Theorem 5.3.3. Let S₂ ⊨ PA². Then the first order part S of S₂ is isomorphic to N, the structure of the natural numbers.
Proof. Using the induction scheme and (N1) we obtain
PA² ⊨ ∀x(xεN).
But this means S = N^{S₂}. Because of (N2) the first order part can be extended to a (first order) NT-model satisfying (Ind) of Theorem 4.1.2. By that theorem we get the isomorphism between S and N.
But this means that second order logic is quite powerful: using second order logic we can fix the natural numbers. This conflicts with the compactness and Löwenheim-Skolem properties.
Theorem 5.3.4. Second order logic does not have the compactness property.
Theorem 5.3.5. Second order logic has neither the Löwenheim-Skolem upward property nor the Löwenheim-Skolem downward property.
Using a certain modification of the semantics of second order logic we will be able to talk about subsets of a structure without losing a first order semantics. We call this logic weak second order logic. The syntax of L₂^w is the same as that of L₂.
Definition 5.3.6. An L₂^w-structure S₂^w is a structure for the two-sorted language L₂^w such that
S₂^w = ((S₁, S₂), C, F, P)
with S₂ ⊆ Pow(S₁) and ε^{S₂^w} = ∈. S₁ is called the first order part of S₂^w; S₂ is the second order part.
But we can also regard L₂^w simply as a two-sorted language. A two-sorted structure for L₂^w (viewed not as weak second order logic)
S² = ((S₁, S₂), C, F, P)
is called regular if S₂ ⊆ Pow(S₁) and ε^{S²} = ∈. So regular structures are just weak second order structures.
Theorem 5.3.7. Regard L₂^w as a two-sorted logic. Then every two-sorted structure S² for L₂^w is isomorphic to a regular structure, i.e. to a weak second order structure for L₂^w.
Proof. We define φ : S₂ → Pow(S₁) by
φ(S) = {s ∈ S₁ : s ε^{S²} S}.
Furthermore, let S₁′ = S₁, S₂′ = {φ(s) : s ∈ S₂} and let C′, F′, P′ be the homomorphic interpretations of the non-logical symbols under φ. Then
S′² = ((S₁′, S₂′), C′, F′, P′)
is a regular structure and φ is a homomorphism onto S₂′. φ is one-one, since sets are equal iff they have the same elements.
As a corollary to this theorem we can transfer results of many-sorted logic to weak second order logic.
Theorem 5.3.8 (Compactness for weak second order logic). Let M be a set of L₂^w-formulas such that every finite subset of M is consistent. Then M is consistent.
Theorem 5.3.9 (Löwenheim-Skolem for weak second order logic). Let S₂^w be an L₂^w-structure and κ = card(S₁ ∪ S₂) ≥ ω.
a) For any infinite λ with card(L₂^w) ≤ λ ≤ κ and any sets S₁⁰ ⊆ S₁ and S₂⁰ ⊆ S₂ with card(S₁⁰ ∪ S₂⁰) ≤ λ, there is an elementary substructure S′₂^w ⪯ S₂^w such that S₁⁰ ⊆ S₁′, S₂⁰ ⊆ S₂′ and card(S₁′ ∪ S₂′) = λ.
b) For any λ ≥ max{κ, card(L₂^w)} there is an elementary extension S′₂^w ⪰ S₂^w such that card(S₁′ ∪ S₂′) = λ.
Finally we mention that the two-sorted calculus defined at the end of section 5.1 is complete for L₂^w, too.
Exercises
E 5.3.1. Prove Theorem 5.3.4 and Theorem 5.3.5.
E 5.3.2. Prove the compactness and Löwenheim-Skolem properties for weak second order logic.
E 5.3.3. Make explicit the complete calculus for weak second order logic mentioned at the end of the section.
E 5.3.4. L₂^fin is called finite second order logic, which is obtained by changing the semantics of L₂ as follows: second order objects are finite sets of first order objects. I.e. an L₂^fin-structure is given by
S₂^fin = ((S, Fin(S)), C, F, P),
where Fin(S) is the set of all finite subsets of S. Prove:
a) For every L₂^fin-sentence F there is an L₂-sentence G with
S₂^fin ⊨ F iff S₂ ⊨ G.
b) L₂^fin does not have the compactness property.
c) L₂^fin has the Löwenheim-Skolem property.
Remark: This part of the exercise is difficult.
Appendix A
A.1 The Arithmetisation of NT
In this section we slightly modify the language L_NT of number theory. We have in L_NT
• variables x₀, x₁, … in a fixed enumeration,
• the constant symbol 0,
• a function symbol f for each primitive recursive function f,
• the relation symbol =,
• the connectives ¬, ∧,
• the quantifier ∀.
So we think of ∨, →, ∃ as defined symbols. In contrast to section 4.1 we do not have relation symbols for primitive recursive relations, which means no loss of generality, since we may represent every primitive recursive relation by its characteristic function.
The aim of this section is to prove the existence of an arithmetisation satisfying the hypotheses I.-IV. and (G1), (G2), (G3) of section 4.2. It will turn out that the proof of (G3) is quite cumbersome. In order to code primitive recursive functions we code the function symbols of L_NT, i.e. we have an arithmetisation ⌜f⌝ for function symbols f of L_NT. For example we have (cf. section 3.3)
• ⌜Cₖⁿ⌝ = ⟨0, n, k⟩
• ⌜Pₖⁿ⌝ = ⟨1, n, k⟩
and so on.
Definition A.1.1. We define the Gödel number ⌜t⌝ for L_NT-terms t recursively:
1. ⌜xₙ⌝ = ⟨6, n⟩ for the variable xₙ,
2. ⌜0⌝ = ⟨7⟩ for the constant symbol 0,
3. ⌜ft₁…tₙ⌝ = ⟨8, ⌜f⌝, ⌜t₁⌝, …, ⌜tₙ⌝⟩ for a function symbol f with #f = n and L_NT-terms t₁, …, tₙ.
Definition A.1.2. We define the Gödel number ⌜F⌝ for L_NT-formulas F inductively:
1. ⌜s = t⌝ = ⟨9, ⌜s⌝, ⌜t⌝⟩ for L_NT-terms s, t,
2. ⌜¬F⌝ = ⟨10, ⌜F⌝⟩ for an L_NT-formula F,
3. ⌜F ∧ G⌝ = ⟨11, ⌜F⌝, ⌜G⌝⟩ for L_NT-formulas F, G,
4. ⌜∀xₙF⌝ = ⟨12, ⌜xₙ⌝, ⌜F⌝⟩ for a variable xₙ and an L_NT-formula F.
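These two definitions can be tried out directly. In the sketch below (our illustration, not part of the text) Python tuples stand in for the primitive recursive sequence coding ⟨…⟩ of section 3.3, which keeps the clause structure visible without committing to a particular prime-power coding:

```python
# Goedel numbers of terms and formulas (Definitions A.1.1 and A.1.2).
# Nested Python tuples stand in for the sequence coding <...>.

def code_var(n):            return (6, n)
def code_zero():            return (7,)
def code_app(f, *ts):       return (8, f) + ts   # f = code of a function symbol
def code_eq(s, t):          return (9, s, t)
def code_not(F):            return (10, F)
def code_and(F, G):         return (11, F, G)
def code_forall(x, F):      return (12, x, F)

# code of the formula  forall x0 (x0 = 0)
F = code_forall(code_var(0), code_eq(code_var(0), code_zero()))
assert F == (12, (6, 0), (9, (6, 0), (7,)))
```

The leading component of each code (6 through 12) plays the role of the tag tested in the primitive recursive relations Var, Term and Fml below.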
Proposition A.1.3. The following relations are primitive recursive:
a) Var(n) ⇔ n is the code of a variable
b) FSymb(n) ⇔ n is the code of a function symbol
c) Term(n) ⇔ n is the code of a term
d) Fml(n) ⇔ n is the code of a formula
e) VarT(n, m) ⇔ n is the code of a variable occurring in the term with code m
f) VarF(n, m) ⇔ n is the code of a variable occurring in the formula with code m
g) FV(n, m) ⇔ n is the code of a free variable of the formula with code m
h) BV(n, m) ⇔ n is the code of a bound variable of the formula with code m
i) Sent(n) ⇔ n is the code of a sentence
Proof. This is an easy exercise.
So we have established the first hypothesis. The next aim of this section is to define the function subₙ of the second hypothesis. There are two substitution functions, one for substitution inside terms and one for substitution inside formulas.
Proposition A.1.4. The function substT with
substT(⌜s⌝, ⌜x⌝, ⌜t⌝) = ⌜s_x(t)⌝
is primitive recursive.
Proof. Define by course-of-values recursion on x:
substT(x, y, z) =
  x,  if Var(x) ∧ Var(y) ∧ Term(z) ∧ x ≠ y;
  z,  if Var(x) ∧ Var(y) ∧ Term(z) ∧ x = y;
  ⟨(x)₀, (x)₁, substT((x)₂, y, z), …, substT((x)_{lh(x)∸1}, y, z)⟩,
      if Term(x) ∧ ¬Var(x) ∧ Var(y) ∧ Term(z);
  0   otherwise.
Here the third clause of the definition of substT should be read as
substT(⌜ft₁…tₙ⌝, y, z) = ⌜f(substT(t₁, y, z))…(substT(tₙ, y, z))⌝.
Proposition A.1.5. The function subst with
subst(⌜F⌝, ⌜x⌝, ⌜t⌝) = ⌜F_x(t)⌝
is primitive recursive.
Proof. Define by course-of-values recursion on x:
subst(x, y, z) =
  ⟨(x)₀, substT((x)₁, y, z), substT((x)₂, y, z)⟩,
      if Fml(x) ∧ Var(y) ∧ Term(z) ∧ (x)₀ = 9;
  ⟨(x)₀, subst((x)₁, y, z)⟩,
      if Fml(x) ∧ Var(y) ∧ Term(z) ∧ (x)₀ = 10;
  ⟨(x)₀, subst((x)₁, y, z), subst((x)₂, y, z)⟩,
      if Fml(x) ∧ Var(y) ∧ Term(z) ∧ (x)₀ = 11;
  ⟨(x)₀, (x)₁, subst((x)₂, y, z)⟩,
      if Fml(x) ∧ Var(y) ∧ Term(z) ∧ (x)₀ = 12 ∧ (x)₁ ≠ y;
  x,  if Fml(x) ∧ Var(y) ∧ Term(z) ∧ (x)₀ = 12 ∧ (x)₁ = y;
  0   otherwise.
Thus subst is primitive recursive.
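The recursion clauses of substT and subst can be executed on nested Python tuples standing in for the coded terms and formulas. The following sketch (ours; it illustrates the clause structure, not primitive recursiveness itself) follows the case distinction on (x)₀, in particular stopping at a quantifier that binds the substituted variable:

```python
# Substitution on codes (Propositions A.1.4 and A.1.5): replace the
# variable with code y by the term with code z inside the code x.
# Codes are tuples: (6,n) variable, (7,) zero, (8,f,t1,...,tn) application,
# (9,s,t) equation, (10,F) negation, (11,F,G) conjunction, (12,v,F) forall.

def subst_term(x, y, z):
    if x[0] == 6:                       # a variable: replace iff it is y
        return z if x == y else x
    if x[0] == 7:                       # the constant 0
        return x
    # x = (8, f, t1, ..., tn): substitute in every argument
    return x[:2] + tuple(subst_term(t, y, z) for t in x[2:])

def subst(x, y, z):
    tag = x[0]
    if tag == 9:                        # s = t
        return (9, subst_term(x[1], y, z), subst_term(x[2], y, z))
    if tag == 10:                       # not F
        return (10, subst(x[1], y, z))
    if tag == 11:                       # F and G
        return (11, subst(x[1], y, z), subst(x[2], y, z))
    if tag == 12:                       # forall v F: stop if v is y
        return x if x[1] == y else (12, x[1], subst(x[2], y, z))
    raise ValueError("not a formula code")

F = (12, (6, 1), (9, (6, 0), (6, 1)))   # forall x1 (x0 = x1)
assert subst(F, (6, 0), (7,)) == (12, (6, 1), (9, (7,), (6, 1)))
assert subst(F, (6, 1), (7,)) == F      # x1 is bound, nothing happens
```

The last two lines show the two quantifier cases: substitution passes under a quantifier binding a different variable, and is blocked by a quantifier binding the variable itself.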
Corollary A.1.6. There is a primitive recursive function subₙ such that
subₙ(⌜F⌝, ⌜t₁⌝, …, ⌜tₙ⌝) = ⌜F_{x₁,…,xₙ}(t₁, …, tₙ)⌝.
Proof. Use Proposition A.1.5.
Definition A.1.7. We define the L_NT-term n by recursion on n ∈ IN:
1. 0 is the constant symbol 0,
2. n + 1 is the term Sn, where S is the function symbol for the successor function S.
Proposition A.1.8. There is a primitive recursive function N with
N(n) = ⌜n⌝.
Proof. Define by recursion N(0) = ⌜0⌝ and
N(n + 1) = ⌜n + 1⌝ = ⌜Sn⌝ = ⟨8, ⌜S⌝, N(n)⟩.
The last proposition has established the third hypothesis. To define the proof predicate we have to fix a calculus for first order logic. We opt for a Hilbert-style calculus. There we have the following axioms:
• sententially valid formulas,
• ∀xF → F_x(t),
• the defining axioms for = (cf. 1.10.2).
The non-logical axioms of NT are given by
• the defining axioms for the function symbols of L_NT,
• the induction scheme (IND) (cf. section 4.1).
Proposition A.1.9. The following relations are primitive recursive:
a) PP(n, m) ⇔ n is the code of a propositional part of the formula with code m
b) PA(n, m) ⇔ n is the code of a propositional atom of the formula with code m
c) BA(n, m) ⇔ m is the code of a boolean assignment on the propositional atoms of the formula with code n
Proof. Parts a) and b) are established easily. For part c) observe that we have
BA(n, m) ⇔ Fml(n) ∧ Seq(m) ∧ lh(m) = n ∧ ∀x ≤ n((PA(x, n) → (m)ₓ ≤ 1) ∧ (¬PA(x, n) → (m)ₓ = 2)).
Proposition A.1.10. There is a primitive recursive function Bval with
Bval(⌜F⌝, n) = the truth value of the formula F under the boolean assignment n.
Proof. Define by course-of-values recursion on x:
Bval(x, y) =
  (y)ₓ,  if Fml(x) ∧ BA(x, y) ∧ ((x)₀ = 9 ∨ (x)₀ = 12);
  1 ∸ Bval((x)₁, y),  if Fml(x) ∧ BA(x, y) ∧ (x)₀ = 10;
  Bval((x)₁, y) · Bval((x)₂, y),  if Fml(x) ∧ BA(x, y) ∧ (x)₀ = 11;
  0   otherwise.
Lemma A.1.11. There is a primitive recursive relation PropAx with
PropAx(⌜F⌝) ⇔ F is a sententially valid formula.
Proof. By the definition of BA we have
BA(n, m) ⇒ m ≤ ⟨2, …, 2⟩ (n times) ≤ p(n)^{3·n}.
Thus we can define
PropAx(x) ⇔ ∀y ≤ p(x)^{3·x} (BA(x, y) ⇒ Bval(x, y) = 1),
and PropAx is primitive recursive.
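PropAx amounts to a bounded search over all boolean assignments, i.e. a truth-table check. A small Python analogue (ours, not part of the text; it works on nested tuple codes in the style above, with equations tagged 9 and quantified formulas tagged 12 as the propositional atoms, and it enumerates assignments as dictionaries rather than coded sequences):

```python
from itertools import product

def atoms(F):
    # propositional atoms: equations (tag 9) and quantified formulas (tag 12)
    if F[0] in (9, 12):
        return {F}
    if F[0] == 10:                      # negation
        return atoms(F[1])
    return atoms(F[1]) | atoms(F[2])    # conjunction, tag 11

def bval(F, b):
    # truth value under the boolean assignment b (cf. Bval)
    if F[0] in (9, 12):
        return b[F]
    if F[0] == 10:
        return 1 - bval(F[1], b)
    return bval(F[1], b) * bval(F[2], b)

def prop_ax(F):
    # F is sententially valid iff bval(F, b) = 1 for every assignment b
    at = sorted(atoms(F), key=repr)
    return all(bval(F, dict(zip(at, bits))) == 1
               for bits in product((0, 1), repeat=len(at)))

A = (9, (6, 0), (7,))                    # an atom: the equation x0 = 0
assert prop_ax((10, (11, A, (10, A))))   # not(A and not A) is valid
assert not prop_ax(A)
```

The bound p(x)^{3·x} in the lemma plays the role of the finite range of the `product` enumeration here: both make the quantifier over assignments a bounded, hence decidable, search.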
Proposition A.1.12. There are primitive recursive relations QuantAx, EqAx, NonLogAx, Ax with:
1. QuantAx(⌜F⌝) ⇔ F has the shape ∀xG → G_x(t)
2. EqAx(⌜F⌝) ⇔ F is one of the defining axioms for =
3. NonLogAx(⌜F⌝) ⇔ F is a non-logical axiom of NT
4. Ax(⌜F⌝) ⇔ F is an axiom of NT
Proof. This is a cumbersome but quite easy exercise.
Up to now we have only described the axioms of the Hilbert-style calculus. We take the following rules:
• ⊢ F and ⊢ F → G ⇒ ⊢ G (modus ponens)
• ⊢ F → G and x ∉ FV(F) ⇒ ⊢ F → ∀xG (generalisation)
Proposition A.1.13. The following relations are primitive recursive:
1. MP(n, m, k) ⇔ n = ⌜F⌝ and m = ⌜F → G⌝ and k = ⌜G⌝
2. Gen(n, m) ⇔ n = ⌜F → G⌝ and m = ⌜F → ∀xG⌝ and ¬FV(⌜x⌝, ⌜F⌝).
Now we have finished our preparations and obtain the following result.
Lemma A.1.14. There is a primitive recursive relation Proof with
Proof(n, m) ⇔ n codes a proof of the formula with code m.
Proof. We define
Proof(x, y) ⇔ Seq(x) ∧ ∀k < lh(x)[Ax((x)ₖ) ∨ ∃i, j < k MP((x)ᵢ, (x)ⱼ, (x)ₖ) ∨ ∃i < k Gen((x)ᵢ, (x)ₖ)] ∧ y = (x)_{lh(x)∸1}.
This result can easily be extended to extensions T of NT which are given primitive recursively.
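The shape of Proof(x, y) — a sequence each of whose members is an axiom or follows from earlier members by a rule — is exactly that of a line-by-line proof checker. A schematic Python version (ours, not part of the text; formulas are nested tuple codes in the style above, the axiom set is a parameter, and the generalisation rule is omitted for brevity; since → is a defined symbol, the code of F → G is taken to be that of ¬(F ∧ ¬G)):

```python
# A line-by-line proof checker in the shape of Proof(x, y).

def imp(F, G):
    # F -> G is the defined formula not(F and not G):
    # code (10, (11, F, (10, G)))
    return (10, (11, F, (10, G)))

def mp(n, m, k):
    # MP(n, m, k): m codes F -> G where n codes F and k codes G
    return m == imp(n, k)

def check_proof(seq, y, is_axiom):
    # every line is an axiom or follows from two earlier lines by MP,
    # and the last line is the formula to be proved
    for k, line in enumerate(seq):
        ok = (is_axiom(line)
              or any(mp(seq[i], seq[j], line)
                     for i in range(k) for j in range(k)))
        if not ok:
            return False
    return bool(seq) and seq[-1] == y

A = (9, (7,), (7,))               # the formula 0 = 0, taken as an axiom here
proof = [A, imp(A, A), A]         # axiom, axiom, modus ponens
assert check_proof(proof, A, lambda F: F in (A, imp(A, A)))
```

Plugging in Ax gives Proof, and plugging in Ax_T gives the relation Proof_T of the next lemma; decidability of the whole check rests only on decidability of the axiom set.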
Lemma A.1.15. Let T be a primitive recursive extension of NT, i.e. {⌜F⌝ : F ∈ T} is primitive recursive. Then there is a primitive recursive relation Proof_T with
Proof_T(n, m) ⇔ n codes a T-proof of the formula with code m.
Proof. Set Ax_T(x) ⇔ Ax(x) ∨ x ∈ {⌜F⌝ : F ∈ T}. Then Ax_T is primitive recursive. So define Proof_T as Proof in Lemma A.1.14, substituting Ax_T for Ax.
To prove the properties (G1)-(G3) we first have to make some observations.
Theorem A.1.16. Let f be a primitive recursive function and f the corresponding function symbol in L_NT. Then we have for all k₁, …, kₙ, m ∈ IN
f(k₁, …, kₙ) = m ⇒ NT ⊢ f(k₁, …, kₙ) = m.
Proof. By induction on the definition of f.
1. f = Cₖⁿ. Then we have f(k₁, …, kₙ) = k and NT ⊢ ∀x₁…∀xₙ Cₖⁿ(x₁, …, xₙ) = k. Thus
NT ⊢ f(k₁, …, kₙ) = k.
2. f = Pᵢⁿ. Then we have f(k₁, …, kₙ) = kᵢ and NT ⊢ ∀x₁…∀xₙ Pᵢⁿ(x₁, …, xₙ) = xᵢ. Thus
NT ⊢ f(k₁, …, kₙ) = kᵢ.
3. f = S. Then we have f(k) = k + 1 and NT ⊢ S(k) = k + 1 by the definition of the numeral k + 1.
4. f = Sub(g, h₁, …, hₘ). Let f(k₁, …, kₙ) = k. Then we have g(n₁, …, nₘ) = k with nᵢ = hᵢ(k₁, …, kₙ) for i = 1, …, m. By the induction hypothesis we have
NT ⊢ g(n₁, …, nₘ) = k
and for i = 1, …, m
NT ⊢ hᵢ(k₁, …, kₙ) = nᵢ.
Using the defining axioms for Sub(g, h₁, …, hₘ) we obtain
NT ⊢ f(k₁, …, kₙ) = k.
5. f = R(g, h). Let f(k₁, …, kₙ) = k. We prove the claim by side induction on kₙ.
(a) kₙ = 0. Then f(k₁, …, kₙ) = g(k₁, …, k_{n-1}) = k. The induction hypothesis yields
NT ⊢ g(k₁, …, k_{n-1}) = k,
and the defining axioms for R(g, h) give
NT ⊢ f(k₁, …, kₙ) = k.
(b) kₙ = m + 1. Then f(k₁, …, kₙ) = h(k₁, …, k_{n-1}, m, l) = k with f(k₁, …, k_{n-1}, m) = l. The main induction hypothesis yields
NT ⊢ h(k₁, …, k_{n-1}, m, l) = k,
and the side induction hypothesis
NT ⊢ f(k₁, …, k_{n-1}, m) = l.
Because of the defining axioms for R(g, h) we have
NT ⊢ ∀x₁…∀x_{n-1}∀y f(x₁, …, x_{n-1}, Sy) = h(x₁, …, x_{n-1}, y, f(x₁, …, x_{n-1}, y)).
This proves
NT ⊢ f(k₁, …, kₙ) = k.
By this theorem we have for a primitive recursive relation R and k₁, …, kₙ ∈ IN
R(k₁, …, kₙ) ⇒ χ_R(k₁, …, kₙ) = 1 ⇒ NT ⊢ χ_R(k₁, …, kₙ) = 1
and
¬R(k₁, …, kₙ) ⇒ χ_R(k₁, …, kₙ) = 0 ⇒ NT ⊢ χ_R(k₁, …, kₙ) = 0.
So this gives a precise justification for omitting predicate symbols from the language L_NT. From now on we think of predicate symbols P for primitive recursive relations P as abbreviations, i.e. we write
NT ⊢ P(t₁, …, tₙ)
for
NT ⊢ χ_P(t₁, …, tₙ) = 1.
Now, by the definition of Proof_T, we have the following lemma.
Lemma A.1.17. T ⊢ F ⇔ there is an n ∈ IN with NT ⊢ Proof_T(n, ⌜F⌝).
Proof. We have
T ⊢ F ⇔ there is an n ∈ IN with Proof_T(n, ⌜F⌝)
⇔ there is an n ∈ IN with NT ⊢ Proof_T(n, ⌜F⌝),
since the predicate Proof_T is primitive recursive by Lemma A.1.15.
Corollary A.1.18. T ⊢ F ⇒ NT ⊢ Prvbl_T(⌜F⌝).
Thus IV.1. and (G1) have been proved. Now we turn to (G2).
Proposition A.1.19. NT ⊢ Prvbl_T(⌜F⌝) ∧ Prvbl_T(⌜F → G⌝) → Prvbl_T(⌜G⌝).
Proof. We argue informally inside the theory NT. There we have the premises
Prvbl_T(⌜F⌝) and Prvbl_T(⌜F → G⌝).
Thus we have x, y with
Proof_T(x, ⌜F⌝) and Proof_T(y, ⌜F → G⌝).
Because we also have
MP(⌜F⌝, ⌜F → G⌝, ⌜G⌝),
we obtain with z = x ⌢ y ⌢ ⟨⌜G⌝⟩
Proof_T(z, ⌜G⌝).
This means Prvbl_T(⌜G⌝).
A similar proof yields the following result.
Proposition A.1.20.
NT ⊢ Prvbl_T(⌜F → G⌝) ∧ ¬FV(⌜x⌝, ⌜F⌝) → Prvbl_T(⌜F → ∀xG⌝)
The following theorem establishes hypothesis IV.2.
Theorem A.1.21. Let T be an ω-consistent primitive recursive theory extending NT. Then we have
T ⊢ Prvbl_T(⌜F⌝) ⇒ T ⊢ F.
Proof. Assume T ⊬ F. Then we have for all n ∈ IN
¬Proof_T(n, ⌜F⌝).
By Theorem A.1.16 we obtain for all n ∈ IN
NT ⊢ ¬Proof_T(n, ⌜F⌝).
Since T extends NT this gives for all n ∈ IN
T ⊢ ¬Proof_T(n, ⌜F⌝).
Because of the ω-consistency of T we get
T ⊬ ∃x Proof_T(x, ⌜F⌝).
Now the only hypothesis of the previous section left to check is (G3). For this purpose we are going to prove a `formalised' version of Theorem A.1.16.
Definition A.1.22. We define ⌜F ẋ₁ … ẋₙ⌝ recursively on n (outside of NT) as follows:
⌜F ẋ₁⌝ = subst(⌜F⌝, ⌜x₁⌝, N(x₁)),
⌜F ẋ₁ … ẋₙ⌝ = subst(⌜F ẋ₁ … ẋ_{n-1}⌝, ⌜xₙ⌝, N(xₙ)).
I.e. ⌜F ẋ₁ … ẋₙ⌝ is a term with free variables x₁, …, xₙ. It denotes (codes up) the formula F after replacing the free variables by numerals.
Proposition A.1.23.
⌜F ẋ₁ … ẋₙ⌝_{x₁,…,xₙ}(k₁, …, kₙ) = ⌜F_{x₁,…,xₙ}(k₁, …, kₙ)⌝
Theorem A.1.24. Let f be a primitive recursive function symbol. Then there is a primitive recursive function g (depending on f) such that
NT ⊢ fx₁…xₙ = x_{n+1} → Proof(gx₁…xₙ, ⌜f ẋ₁ … ẋₙ = ẋ_{n+1}⌝).
We do not give a proof of this theorem, simply because it is quite cumbersome. The advanced reader only has to check the proof of Theorem A.1.16 and will be able to construct the function g. Moreover, we have all means of proof available in NT, because the induction used to prove Theorem A.1.16 may be replaced inside NT by an application of the induction scheme.
Now we turn to the last point, the proof of (G3).
Corollary A.1.25. NT ⊢ Prvbl_T(⌜F⌝) → Prvbl_T(⌜Prvbl_T(⌜F⌝)⌝)
Proof. By Theorem A.1.24 we have
NT ⊢ Proof_T(x, ⌜F⌝) → Prvbl_T(⌜Proof_T(ẋ, ⌜F⌝)⌝).
Using Corollary A.1.18 and Proposition A.1.19 we obtain
NT ⊢ Prvbl_T(⌜Proof_T(ẋ, ⌜F⌝)⌝) → Prvbl_T(⌜Prvbl_T(⌜F⌝)⌝).
This gives the desired result.
A.2 Naive Theory of the Ordinals
In this section we present a non-set-theoretical approach to the theory of the ordinals. This is not the entirely correct approach, since some standard antinomies cannot be clarified in this context. But we have chosen this entry to the theory of ordinals because in this setting one can imagine better than in set theory what an ordinal should be. We are going to give the foundation for the facts about ordinals used in section 1.4.
Definition A.2.1. Let R be a binary relation on a set S, i.e. R ⊆ S × S.
a) The field of R is given by the set
d(R) = {x ∈ S : ∃y∈S ((x, y) ∈ R ∨ (y, x) ∈ R)}.
b) R is a linear ordering, if
• R is irreflexive, i.e. ∀x∈d(R) ((x, x) ∉ R)
• R is transitive, i.e. ∀x, y, z∈d(R) ((x, y) ∈ R ∧ (y, z) ∈ R ⇒ (x, z) ∈ R)
• R is linear, i.e. ∀x, y∈d(R) ((x, y) ∈ R ∨ x = y ∨ (y, x) ∈ R).
If R is a linear ordering we often write x <R y, or more briefly x ≺ y, instead of (x, y) ∈ R.
c) R is a well-ordering, if R is a linear ordering and
• ∀X (X ⊆ d(R) ∧ X ≠ ∅ ⇒ ∃y∈X ∀z∈X (y <R z ∨ y = z)),
i.e. every non-empty subset of the field of R has a minimal element.
The main feature of well-orderings is the principle of transfinite induction.
Lemma A.2.2 (Transfinite induction). If ≺ is a well-ordering, then
∀x (∀y (y ≺ x → E(y)) → E(x))
implies
∀x∈d(≺) E(x),
where E is any property.
Proof. Towards a contradiction assume that there is an x ∈ d(≺) such that ¬E(x). Then the set
M = {x ∈ d(≺) : ¬E(x)} ⊆ d(≺)
is nonempty. Because ≺ is a well-ordering, M has a minimal element w.r.t. ≺, say x0, i.e. x0 ∈ M with
∀x∈M (x0 ≺ x ∨ x0 = x).
From this we can conclude
∀y ≺ x0 (y ∉ M),
i.e.
∀y ≺ x0 E(y).
Now, using the premise ∀x (∀y (y ≺ x → E(y)) → E(x)), we obtain E(x0), i.e. x0 ∉ M. Contradiction.
Lemma A.2.3. A linear ordering ≺ is a well-ordering iff there are no infinite descending sequences, i.e. there is no sequence (xn)n∈IN ⊆ d(≺) such that xn+1 ≺ xn for all n ∈ IN.
Proof. First let ≺ be a well-ordering and (xn)n∈IN a sequence in d(≺). Define z = min{xn : n ∈ IN}. Then z = xk for some k ∈ IN. So for all n ∈ IN we have xk ≺ xn or xk = xn; in particular xk+1 ≺ xk is impossible, and (xn)n∈IN is not a descending sequence.
Now for the other direction assume ≺ is not a well-ordering. Then there is a nonempty X ⊆ d(≺) without a minimal element. So we can choose a sequence (xn)n∈IN ⊆ X such that xn+1 ≺ xn for all n ∈ IN.
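For finite relations the characterisation of Lemma A.2.3 can be checked mechanically. The following sketch (the function name and the encoding of the ordering as a set of pairs are our own, not part of the text) computes the minimal elements of a subset of the field, whose existence is exactly the well-ordering condition of Definition A.2.1 c):

```python
def minimal_elements(subset, rel):
    """Elements of `subset` with no rel-predecessor inside `subset`.

    `rel` is a finite strict ordering given as a set of pairs (x, y),
    read as x preceding y.  For a well-ordering every non-empty subset
    of the field has exactly one such minimal element.
    """
    return [y for y in sorted(subset) if not any((x, y) in rel for x in subset)]

# The finite well-ordering 2 | 1 | 3 | 0: 2 precedes 1 precedes 3 precedes 0.
chain = [2, 1, 3, 0]
rel = {(chain[i], chain[j]) for i in range(4) for j in range(i + 1, 4)}

assert minimal_elements({2, 1, 3, 0}, rel) == [2]   # least element of the whole field
assert minimal_elements({1, 3, 0}, rel) == [1]      # least element of a proper subset
```

Searching for an element without predecessors is the finite analogue of the proof above: in a finite linear ordering a descending sequence must terminate, and it terminates exactly in such a minimal element.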
Now we are going to define an equivalence relation on the class of all well-orderings.
Definition A.2.4. We call two well-orderings <1 and <2 equivalent, <1 ≅ <2, if there is an order preserving φ : d(<1) →onto d(<2), i.e.
x <1 y ⇒ φ(x) <2 φ(y).
In a few seconds we will see that the equivalence of well-orderings is in fact an equivalence relation. To establish this we need the following lemma.
Lemma A.2.5. If <1, <2 are linear orderings and φ : d(<1) →onto d(<2) is order preserving, then φ is one-one and φ(x) <2 φ(y) implies x <1 y.
Proof. Assume that there are x, y ∈ d(<1) with x ≠ y. Then by linearity of <1 we have x <1 y or y <1 x. Because φ is order preserving we obtain
φ(x) <2 φ(y) or φ(y) <2 φ(x).
Now <2 is irreflexive and it follows that
φ(x) ≠ φ(y),
i.e. φ is one-one. If x <1 y is false, we have x = y or y <1 x, since <1 is linear. So we can conclude φ(x) = φ(y) or φ(y) <2 φ(x), respectively. In neither case φ(x) <2 φ(y) is possible.
Lemma A.2.6. ≅ is an equivalence relation on the class of all well-orderings.
Proof. Let <1, <2, <3 be well-orderings. If we set φ : d(<1) → d(<1), φ(x) = x, then φ is order preserving and onto. Thus we have
<1 ≅ <1.
Now assume <1 ≅ <2. We want to show <2 ≅ <1. We have an order preserving φ : d(<1) →onto d(<2). Because of Lemma A.2.5 we have φ⁻¹ : d(<2) →onto d(<1), and φ⁻¹ is order preserving, i.e.
<2 ≅ <1.
At last assume <1 ≅ <2 and <2 ≅ <3. We want to conclude <1 ≅ <3. There are order preserving mappings φ1 : d(<1) →onto d(<2) and φ2 : d(<2) →onto d(<3). Then φ2 ∘ φ1 : d(<1) →onto d(<3) is order preserving, too, i.e. we have
<1 ≅ <3.
Thus ≅ is an equivalence relation.
For the rest of this section well-orderings are denoted by ≺, <1, <2, … To give some easy examples we are going to visualise well-orderings in the following way:
x0 | x1 | x2 | …
This should be read as follows: we have a well-ordering ≺ with field
d(≺) = {x0, x1, x2, …}
which is ordered such that
x0 ≺ x1 ≺ x2 ≺ …
Using this notation we give some examples of well-orderings ≺ with d(≺) ⊆ IN. For instance
0 | 1 | 2 | 3
is obviously a well-ordering. It is equivalent to
2 | 1 | 3 | 0
(using which order preserving mapping?). And
0 | 1 | 2 | … | n | …
is equivalent to
n | 0 | 1 | 2 | …
Definition A.2.7.
a) <1 is an initial segment of <2 if <1 ⊆ <2, i.e. x <1 y ⇒ x <2 y, and
∀x∈d(<1) ∀y (y <2 x ⇒ y <1 x).
If <1 ≠ <2, then <1 is called a proper initial segment.
b) For z ∈ d(≺) define
≺↾z = {(x, y) : x ≺ y ≺ z}.
≺↾z is a proper initial segment of ≺.
c) Now define a relation < on the class of all well-orderings by
(<1, <2) ∈ < iff ∃z∈d(<2) (<1 ≅ <2↾z).
Thus with
<1 = 0 | 1 | 2 | 3
<2 = 0 | 1 | 2 | … | n | …
<3 = 1 | 2 | 3 | … | n | … | 0
we have that <1 is a proper initial segment of <2, more precisely <1 = <2↾4, and that <2 ≅ <3↾0. This means (<2, <3) ∈ <.
Lemma A.2.8. Let <1 ⊆ <2. Then <1 is a proper initial segment of <2 iff there is a z ∈ d(<2) such that <1 = <2↾z.
Proof. Let <1 be a proper initial segment of <2. Then the set
M = {x ∈ d(<2) : x ∉ d(<1)}
is not empty. Define z = min M. Then we have <1 = <2↾z. The other direction is clear, since <2↾z is a proper initial segment of <2.
Definition A.2.9.
a) An ordinal is an equivalence class [≺] of a well-ordering ≺ w.r.t. ≅.
b) We define an ordering on the ordinals by
[<1] < [<2] iff (<1, <2) ∈ <,
where < is defined as in A.2.7.
Lemma A.2.10. The relation < on the ordinals is well-defined.
Proof. We have to show that if <1 ≅ <2 and <3 ≅ <4 we have
(<2, <3) ∈ < ⇒ (<1, <4) ∈ <.
Thus there are order preserving mappings
φ1 : d(<1) →onto d(<2)
φ3 : d(<3) →onto d(<4)
and there are a z ∈ d(<3) and an order preserving mapping
φ2 : d(<2) →onto d(<3↾z).
By φ3 an order preserving mapping
φ3↾z : d(<3↾z) →onto d(<4↾φ3(z))
is induced. Therefore we can conclude that
φ3↾z ∘ φ2 ∘ φ1 : d(<1) →onto d(<4↾φ3(z))
is order preserving. Thus (<1, <4) ∈ <.
Lemma A.2.11. The class On of all ordinals is well-ordered by <.
Proof.
1. < is irreflexive. Assume that there is a well-ordering ≺ with [≺] < [≺]. Then there are a z ∈ d(≺) and a φ : d(≺) →onto d(≺↾z), i.e. we have
φ(x) ≺ z for all x ∈ d(≺).
Thus the set
M = {x ∈ d(≺) : φ(x) ≺ x}
is not empty, since φ(z) ≺ z. Set x0 = min M, i.e. φ(x0) ≺ x0, and because φ is order preserving we have φ(φ(x0)) ≺ φ(x0). But then φ(x0) ∈ M and φ(x0) ≺ x0 = min M. This is a contradiction.
2. < is transitive. Let [<1] < [<2] < [<3], i.e. there is an x ∈ d(<2) such that <1 ≅ <2↾x, and there is a mapping φ : d(<2) →onto d(<3↾z) for some z ∈ d(<3). Therefore <1 ≅ <3↾φ(x), i.e. [<1] < [<3].
3. < is linear. Let <1, <2 be given. We construct a partial mapping
φ : d(<1) → d(<2).
Set 0₁ = min d(<1) and 0₂ = min d(<2), and define φ(0₁) = 0₂. Because <1 is a well-ordering we are allowed to define φ by transfinite recursion. Thus we may think of φ(y) as defined for all y <1 x. Now set
φ(x) ≃ min{z ∈ d(<2) : z ∉ {φ(y) : y <1 x}} if this minimum exists,
and let φ(x) be undefined otherwise.
With this definition we observe the following properties of φ:
(a) dom(φ) is an initial segment of <1.
Choose x ∈ dom(φ). We prove
y <1 x ⇒ y ∈ dom(φ)
by transfinite induction on y. Thus we have by the induction hypothesis
∀z <1 y (z ∈ dom(φ)).
Because x ∈ dom(φ) we have
∅ ≠ d(<2) \ {φ(z) : z <1 x} ⊆ d(<2) \ {φ(z) : z <1 y}.
Thus φ(y) = min{z ∈ d(<2) : z ∉ {φ(w) : w <1 y}} is defined, i.e. y ∈ dom(φ).
(b) x <1 y ⇒ φ(x) <2 φ(y).
This is clear by the definition of φ.
(c) rg(φ) is an initial segment of <2.
Take y ∈ rg(φ) and x0 ∈ d(<1) with
y = φ(x0) = min{z ∈ d(<2) : z ∉ {φ(x) : x <1 x0}}.
For y′ <2 y it is y′ <2 φ(x0), i.e.
y′ ∈ {φ(x) : x <1 x0} ⊆ rg(φ).
Now we distinguish the following cases:
Case 1: dom(φ) = d(<1) and rg(φ) = d(<2).
Then φ is an order preserving mapping and onto, i.e.
<1 ≅ <2.
Case 2: dom(φ) = d(<1) and rg(φ) ≠ d(<2).
Then, since rg(φ) is a proper initial segment of <2, there is a z ∈ d(<2) with rg(φ) = {y ∈ d(<2) : y <2 z}. So φ : d(<1) →onto d(<2↾z), and we have [<1] < [<2].
Case 3: dom(φ) ≠ d(<1).
Then z = min(d(<1) \ dom(φ)) is defined. Therefore we have
d(<2) = {φ(x) : x <1 z}
by the definition of φ. This means <2 ≅ <1↾z, i.e. [<2] < [<1].
4. < is well-founded.
Let X ⊆ On with X ≠ ∅ and let [<0] ∈ X. Define
X0 = {[<1] ∈ X : [<1] < [<0]}.
If X0 = ∅, then [<0] < [<1] or [<0] = [<1] for all [<1] ∈ X, i.e. [<0] = min X.
If X0 ≠ ∅, then
A = {x ∈ d(<0) : ∃[<1]∈X (<1 ≅ <0↾x)} ≠ ∅.
So am = <0-min A is defined, and for am there is an [<m] ∈ X with
<m ≅ <0↾am.
For [<1] ∈ X0 we have <1 ≅ <0↾x for an x ∈ A, therefore
[<m] = [<0↾am] ≤ [<0↾x] = [<1].
For [<1] ∈ X \ X0 we have
[<m] = [<0↾am] ≤ [<0] ≤ [<1].
So we have [<m] = min X.

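For finite well-orderings the partial mapping φ constructed in part 3 of the proof can be computed literally: it matches the elements of both fields in increasing order until one side is exhausted. A small sketch (the list representation and the names are our own illustration, not part of the text) that reports which of the three cases applies:

```python
def compare(w1, w2):
    """Compare two finite well-orderings, given as lists of their fields
    in increasing order.

    Builds the partial map phi from the proof: phi sends the n-th element
    of w1 to the n-th element of w2, as long as both exist.  Returns
    '=', '<', or '>' according to cases 1, 2, 3 of the proof.
    """
    phi = dict(zip(w1, w2))            # phi is defined on an initial segment of w1
    if len(phi) == len(w1) == len(w2):
        return '='                     # case 1: phi is onto, the orderings are equivalent
    if len(phi) == len(w1):
        return '<'                     # case 2: rg(phi) is a proper initial segment of w2
    return '>'                         # case 3: w2 is equivalent to a proper initial segment of w1

assert compare([0, 1, 2, 3], [2, 5, 7, 9]) == '='   # both represent the ordinal 4
assert compare([0, 1, 2], [0, 1, 2, 3]) == '<'
assert compare([5, 7], [7]) == '>'
```

That every pair of well-orderings falls into exactly one of the three cases is the content of linearity; the trichotomy of the ordinals.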
Now for the rest of this section we denote ordinals by α, β, γ, α0, … Besides we use the following notation:
≺↾β = ≺↾z if β < [≺] and [≺↾z] = β, with z ∈ d(≺).
This definition makes sense, since for β < [≺] there is exactly one z ∈ d(≺) such that [≺↾z] = β. Using this notation we have [≺↾β] = β for an arbitrary well-ordering ≺ with β < [≺].
Lemma A.2.12. For all α ∈ On we have α = [{β ∈ On : β < α}], where the set on the right is ordered by <.
Proof. Take ≺ with α = [≺]. Now define
φ : {β ∈ On : β < α} → d(≺)
by φ(β) = z iff [≺↾z] = β. By the remark above, φ is well-defined. φ is order preserving, because if we take β1 < β2 < α we have
[≺↾z1] = β1 < β2 = [≺↾z2] ⇒ z1 ≺ z2.
φ is onto, since for z ∈ d(≺) it is [≺↾z] < α and φ([≺↾z]) = z. By this observation we have
[{β ∈ On : β < α}] = [≺] = α.
By this lemma we have a canonical representative
{β ∈ On : β < α}
of the ordinal α. We are going to identify these objects. Thus every element of an ordinal is an ordinal, and < coincides with ∈.
Proposition A.2.13.
a) ∅ is an ordinal. It is ∅ = min On.
b) For every α ∈ On, α ∪ {α} is an ordinal, the successor of α, i.e. there is no β ∈ On with α < β < α ∪ {α}.
c) α ⊆ β and β ⊆ γ imply α ⊆ γ.
d) It is α ⊆ α.
e) If it is α ⊆ β and β ⊆ α, then it is α = β.
f) It is α ⊆ β or β ⊆ α.
Definition A.2.14.
a) α ∈ On is a successor ordinal if there is an ordinal β such that α = β ∪ {β}.
b) α ∈ On is a limit ordinal if α ≠ 0 = min On and α is not a successor.
To make this definition more visible think of well-orderings. Typically successor ordinals are obtained by well-orderings of the form
x0 | x1 | x2 | … | z
i.e. by well-orderings with a maximal element. A well-ordering not of this shape is the usual <-relation on IN:
0 | 1 | 2 | … | n | …
This well-ordering represents a limit ordinal. From now on we write α + 1 instead of α ∪ {α}.
Proposition A.2.15. If α ≠ 0, the following assertions are equivalent:
1. α is a limit ordinal.
2. For all β < α it is β + 1 < α.
3. α = sup{β : β < α}.
Proof.
1 ⇒ 2: Suppose β < α. Then it is β + 1 ≤ α. If β + 1 = α, then α is not a limit ordinal. So it is β + 1 < α.
2 ⇒ 3: Let γ = sup{β : β < α}. Then it is γ ≤ α. But if γ < α, then it is γ + 1 < α and so γ + 1 ≤ γ. Contradiction.
3 ⇒ 1: Assume that α is not a limit ordinal. Then there is a β such that β + 1 = α. Hence
α = sup{γ : γ < α} = β < β + 1 = α.
Contradiction.
At this point we are able to prove the facts about ordinals used in section 1.4.
Theorem A.2.16 (Transfinite induction). If for all α ∈ On we have
(∀β < α F(β)) → F(α),
then we have F(α) for all α ∈ On.
Proof. This follows directly from Lemma A.2.2 and Lemma A.2.11.
Corollary A.2.17. If we have
1. F(0),
2. ∀α (F(α) → F(α + 1)),
3. (∀β < λ F(β)) → F(λ) for every limit ordinal λ,
then we have F(α) for all α ∈ On.
Proof. By transfinite induction on α we prove ∀α F(α). Thus we have
∀β < α F(β)
as induction hypothesis. Now we have to show F(α). If α = 0 we have F(α) by 1. If α is a successor, say α = β + 1, then it is β < α and we have F(β) by the induction hypothesis. But then F(α) follows by 2. If α is a limit ordinal, F(α) follows directly from 3. and the induction hypothesis.
Now we turn to defining functions by transfinite recursion. We want to define a function F on the ordinals in such a way that we are allowed to define F(α) for α ∈ On in terms of α and all the F(β) for β < α. All those F(β) are contained in the single object
F↾α = F↾{β : β < α}.
Theorem A.2.18 (Transfinite recursion). If G is a binary function, then there is a uniquely determined function F such that
F(α) = G(α, F↾α)
holds for all α ∈ On.
Proof. First we prove that F is uniquely determined. Therefore let F′ be a function such that
F′(α) = G(α, F′↾α)
holds for all α ∈ On, too. We prove
∀α (F(α) = F′(α))
by transfinite induction on α. By the induction hypothesis we have
∀β < α (F(β) = F′(β)).
So it is F↾α = F′↾α and therefore
F(α) = G(α, F↾α) = G(α, F′↾α) = F′(α).
This finishes the induction. It remains to prove the existence of the function F. We prove that for every α ∈ On the function F↾α exists, by induction on α. If α = 0, then it is
F↾0 = F↾∅ = ∅,
and if α = β + 1 we have by the induction hypothesis the function F↾β. Now we define for γ < α
F↾α(γ) = F↾β(γ) if γ < β,
F↾α(γ) = G(β, F↾β) if γ = β.
And if α is a limit ordinal, we have by the induction hypothesis for every β < α a function F↾β. Now define for β < α
F↾α(β) = F↾(β + 1)(β).
This is defined, since β < α implies β + 1 < α. Now one can prove that the function F defined by
F(α) = F↾(α + 1)(α)
comes up to
F(α) = G(α, F↾α)
by induction on α. This is left to the reader.
Corollary A.2.19. There is a uniquely determined function F such that
1. F(0) = X,
2. F(α + 1) = G(α, F(α)),
3. F(λ) = H(λ, F↾λ) for limit ordinals λ,
where X is a set and G, H are given functions.
Proof. For α ∈ On and functions f define the function G′ by
G′(α, f) = X if α = 0,
G′(α, f) = G(β, f(β)) if α = β + 1,
G′(α, f) = H(α, f) if α is a limit ordinal,
and apply Theorem A.2.18.
By this corollary we have proved all the facts we presented in section 1.4.
A.3 Cardinal Numbers
In section 1.4 we met cardinals in the form of special ordinals. Here we will give a short introduction to the theory of cardinals and to elementary cardinal arithmetic, using the naive theory of the ordinals. To study cardinals means to study sets with respect to quantitative aspects. For finite sets we are used to this. But are there different types of infinite sets? This theory is a theory of infinity. It was developed by Georg Cantor at the end of the last century.
Definition A.3.1. The sets A, B are called equinumerous if there is a one-one map from A onto B. In this case we write A ≈ B.
Proposition A.3.2. ≈ is an equivalence relation.
We have the following equinumerous sets.
Lemma A.3.3. Let A, B, C, D be sets. Then it is:
a) If A ≈ B and C ≈ D, then
• A × C ≈ B × D
• Pow(A) ≈ Pow(B)
• C^A ≈ D^B
b) If A ∩ B = ∅, then C^(A∪B) ≈ C^A × C^B.
c) It is C^(A×B) ≈ (C^B)^A.
d) It is Pow(A) ≈ {0, 1}^A.
Proof. We leave a) and c) to the reader. In b) we have to define a one-one map
F : C^(A∪B) →onto C^A × C^B.
So let f ∈ C^(A∪B), i.e. f : A ∪ B → C. Then we define F(f) = (f↾A, f↾B). So F is one-one, and since A ∩ B = ∅ we also have that F is onto. This proves b). In d) we just have to map every subset of A to its characteristic function.
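For a finite set A the map used in d), sending a subset to its characteristic function, can be written out directly (the names below are our own illustration, not part of the text):

```python
from itertools import combinations

def char_function(subset, universe):
    """The characteristic function of `subset`, as a 0/1 tuple indexed by `universe`."""
    return tuple(1 if x in subset else 0 for x in universe)

universe = ['p', 'q', 'r']
# All 2^3 subsets of the universe, and their images under the map.
subsets = [set(c) for r in range(len(universe) + 1) for c in combinations(universe, r)]
images = {char_function(s, universe) for s in subsets}

# The map is one-one and onto: the 2^3 subsets hit all 2^3 functions into {0, 1}.
assert len(subsets) == len(images) == 2 ** len(universe)
assert char_function({'p', 'r'}, universe) == (1, 0, 1)
```

The same correspondence, applied to an infinite A, is what makes Pow(A) ≈ {0, 1}^A in the lemma.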
To give just one example concerning the real numbers R, we have
R ≈ Pow(IN) ≈ IN^IN ≈ {0, 1}^IN,
and the interested reader is requested to prove this. Using Proposition A.3.2 we can imagine cardinals as equivalence classes.
Definition A.3.4. The equivalence classes of the relation ≈ are called cardinals. The equivalence class of the set A is denoted by card(A).
Now we are going to introduce an ordering on the class of all cardinals.
Definition A.3.5. Let A, B be sets. Then we define A ≼ B if there is a one-one map from A to B.
Using the well-ordering theorem the following theorem is just a proposition. But it remains true even if the well-ordering theorem is not available. In that context it is called the Cantor-Bernstein theorem.
Theorem A.3.6. A ≼ B and B ≼ A imply A ≈ B.
Proof. We will only give a sketch of the proof, not using the well-ordering theorem. First we prove a special version of the theorem. Therefore let A ⊆ B ⊆ C with A ≈ C be given. Then we will show B ≈ C.
Since C ≈ A we have a one-one map f from C onto A. Now define
B0 = B, Bn+1 = {f(b) : b ∈ Bn},
C0 = C, Cn+1 = {f(c) : c ∈ Cn}
for n ∈ IN. Then g : C → B defined by
g(x) = f(x) if there is an n ∈ IN with x ∈ Cn \ Bn,
g(x) = x otherwise
is a one-one map from C onto B, i.e. B ≈ C.
Now we prove the general version of the theorem. Since A ≼ B and B ≼ A we have one-one maps f : A → B and g : B → A. But then
A′ = {g(f(a)) : a ∈ A} ⊆ {g(b) : b ∈ B} ⊆ A,
and A ≈ A′ (with g ∘ f one-one and onto A′). So we get by the above observations
A ≈ {g(b) : b ∈ B} ≈ B.
The relation ≼ can be transferred to the cardinals.
Proposition A.3.7. The relations ≼ and ≈ are compatible.
Proof. Let A, B, C, D be sets with A ≈ B ≼ C ≈ D. Then A ≼ D, using composition of functions. This is what had to be proved.
By the well-ordering theorem there is an ordinal in every equivalence class w.r.t. ≈, i.e. in every cardinal. So the least such ordinal is a canonical representative for that equivalence class. From now on we will identify a cardinal with its canonical representative, i.e. the class of all cardinals is a subclass of the ordinals. Furthermore ≼ and ≤ coincide.
Proposition A.3.8. For all ordinals α it is card(α) ≤ α.
Proposition A.3.9. If κ is a cardinal with κ ≥ ω, then κ is a limit ordinal.
Proof. If κ = α + 1 > ω, then one can construct a one-one map from κ onto α, which means card(κ) ≤ α < κ. This contradicts the fact that κ, as a cardinal, is the least ordinal in its equivalence class.
Now we will present a famous result of G. Cantor from 1892. Using this result we can produce larger and larger cardinals.
Theorem A.3.10. For every set A it is card(A) < card(Pow(A)).
Proof. Since the function f : A → Pow(A) given by f(a) = {a} is one-one, we have
card(A) ≤ card(Pow(A)).
Now assume card(A) = card(Pow(A)). Then there is a one-one map
g : A →onto Pow(A).
But then define (diagonalisation!)
B = {a ∈ A : a ∉ g(a)}.
We claim B ∉ rg(g), contradicting the fact that g was assumed to be onto. Therefore suppose B ∈ rg(g). Then there is an a ∈ A such that
g(a) = B.
If a ∈ B, then by definition of B
a ∉ g(a) = B.
This is a contradiction. If a ∉ B, then we get by definition of B
a ∈ g(a) = B.
In either case we get a ∈ B and a ∉ B, which is impossible.
Now we turn to cardinal arithmetic, i.e. we want to determine the cardinalities of the sets
A + B and A × B,
where A and B are sets and A + B is the disjoint union
(A × {0}) ∪ (B × {1})
of A and B.
Definition A.3.11. Let κ and λ be cardinals.
a) Define κ ⊕ λ = card(κ + λ), the cardinal sum of κ and λ.
b) Let κ ⊗ λ = card(κ × λ) be the cardinal product of κ and λ.
For finite cardinals, i.e. cardinals < ω (and in fact all natural numbers are cardinals), cardinal sum and product coincide with the usual addition and multiplication. In general we have the following properties of cardinal sum and product.
Lemma A.3.12. Let κ, λ, μ be cardinals. Then we have the following:
a) (κ ⊕ λ) ⊕ μ = κ ⊕ (λ ⊕ μ) and (κ ⊗ λ) ⊗ μ = κ ⊗ (λ ⊗ μ)
b) κ ⊕ λ = λ ⊕ κ and κ ⊗ λ = λ ⊗ κ
c) κ ⊕ 0 = κ, κ ⊗ 0 = 0 and κ ⊗ 1 = κ
d) κ ≤ κ′ and λ ≤ λ′ imply κ ⊕ λ ≤ κ′ ⊕ λ′ and κ ⊗ λ ≤ κ′ ⊗ λ′
e) κ ⊗ (λ ⊕ μ) = (κ ⊗ λ) ⊕ (κ ⊗ μ)
Definition A.3.13. Define the canonical ordering <c on On × On by
(α, β) <c (α′, β′) iff max(α, β) < max(α′, β′), or
max(α, β) = max(α′, β′) ∧ α < α′, or
max(α, β) = max(α′, β′) ∧ α = α′ ∧ β < β′,
with ordinals α, α′, β, β′.
Lemma A.3.14. <c is a well-ordering on On × On.
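Restricted to pairs of natural numbers, the canonical ordering is simply the lexicographic ordering of the triples (max(α, β), α, β), so it can be realised as a sort key (a sketch; the function name is ours):

```python
def canonical_key(pair):
    """Sort key realising the canonical ordering <_c on pairs of finite ordinals."""
    a, b = pair
    return (max(a, b), a, b)

pairs = [(a, b) for a in range(3) for b in range(3)]
pairs.sort(key=canonical_key)

# The enumeration fills the squares max(a, b) = 0, 1, 2 one after the other,
# which is why each product aleph_alpha x aleph_alpha forms an initial
# segment of On x On, as used in the proof of Lemma A.3.16 below.
assert pairs == [(0, 0), (0, 1), (1, 0), (1, 1),
                 (0, 2), (1, 2), (2, 0), (2, 1), (2, 2)]
```

On IN × IN this ordering has order type ω, so it already exhibits the isomorphism π of the next paragraph for the first infinite cardinal.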
By  : On  On ! On we will denote the order isomorphism between (On  On; <c)
and (On; <). It will be the main tool in determining cardinal sums and products. But
rst we turn to the @-function.
De nition A.3.15. Let @ : On ! Card be the enumeration function of the in nite
cardinals, i.e. we have the following properties:
1. If < , then !  @ < @ :
2. If @  < @ +1 , then card( ) = @ .
3. If   ! is a cardinal, then there is an 2 On with @ = .
Lemma A.3.16. π↾(ℵ_α × ℵ_α) is a one-one map from ℵ_α × ℵ_α onto ℵ_α.
Proof. We prove the lemma by induction on α. By the definition of the canonical ordering of On × On, ℵ_α × ℵ_α is an initial segment of On × On. Therefore π maps ℵ_α × ℵ_α onto an ordinal γ ∈ On. We have to prove γ = ℵ_α. If γ < ℵ_α, we get
ℵ_α ≤ ℵ_α ⊗ ℵ_α = card(γ) ≤ γ < ℵ_α.
This is a contradiction. If γ > ℵ_α, then ℵ_α = π(β, δ) with β, δ < ℵ_α, since π is onto. Now define
η = max(β, δ) + 1 < ℵ_α,
since infinite cardinals are limit ordinals. By the definition of the canonical ordering we get
π⁻¹[ℵ_α] ⊆ η × η,
hence ℵ_α ≤ card(η) ⊗ card(η). This implies card(η) ≥ ω. Therefore there exists a ξ ∈ On with ℵ_ξ = card(η) ≤ η < ℵ_α. But this means ξ < α, and we may use the induction hypothesis to obtain that π↾(ℵ_ξ × ℵ_ξ) is a one-one map from ℵ_ξ × ℵ_ξ onto ℵ_ξ. This implies
ℵ_α ≤ card(η) ⊗ card(η) = ℵ_ξ ⊗ ℵ_ξ = ℵ_ξ < ℵ_α.
This is a contradiction, too. Hence γ = ℵ_α.
Corollary A.3.17. ℵ_α ⊗ ℵ_α = ℵ_α.
Theorem A.3.18. Let κ, λ be infinite cardinals. Then we have
κ ⊕ λ = κ ⊗ λ = max(κ, λ).
Proof. Since κ, λ are infinite, there are α, β ∈ On with κ = ℵ_α and λ = ℵ_β. Without loss of generality it is max(ℵ_α, ℵ_β) = ℵ_β. Then we have
ℵ_β ≤ ℵ_α ⊕ ℵ_β ≤ ℵ_β ⊕ ℵ_β ≤ ℵ_β ⊗ ℵ_β = ℵ_β
and
ℵ_β ≤ ℵ_α ⊗ ℵ_β ≤ ℵ_β ⊗ ℵ_β = ℵ_β,
where we used Corollary A.3.17 for the equality symbol.
Having thus seen that cardinal sum and product are trivial, we close our very short exposition of cardinals.
Bibliography
Historical Texts
The following is a list of the books mentioned in the historical remarks.
Boole, G.
1847 The mathematical analysis of logic, reprinted 1947, Blackwell: Oxford, 82 pp
1854 The laws of thought, reprinted 1958, Dover, xi+424 pp
Frege, F.L.G.
1879 Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, reprinted 1964 in: Begriffsschrift und andere Aufsätze, Olms, xvi+124 pp
Leibniz, G.W.
1666 De arte combinatoria, reprinted 1858 in: Leibnizens mathematische Schriften,
Halle, pp 7{79
Peano, G.
1897 Formulaire de mathématiques, vol. 2, chap. I, Bocca & Clausen, 63 pp
Whitehead, A.N. & Russell, B.
1910 Principia mathematica, vol. I, 2nd ed. reprinted 1957, Cambridge University Press, xlvi+674 pp
1912 Principia mathematica, vol. II, 2nd ed. reprinted 1957, Cambridge University Press, xxiv+772 pp
1913 Principia mathematica, vol. III, 2nd ed. reprinted 1957, Cambridge University Press, x+491 pp
Original Articles
In the following the original papers concerning important theorems of this book are
listed.
Church, A.
1936 A note on the `Entscheidungsproblem', Journal of Symbolic Logic 1, pp 40{41
and pp 101{102
Craig, W.
1957 Linear reasoning. A new form of the Herbrand-Gentzen theorem, Journal of
Symbolic Logic 22, pp 250{268
1957 Three uses of the Herbrand-Gentzen theorem in relating model theory and proof
theory, Journal of Symbolic Logic 22, 269{285
Gentzen, G.
1935 Untersuchungen uber das logische Schliessen, I, II, Mathematische Zeitschrift
39, pp 176{210 and pp 405{431
Godel, K.
1930 Die Vollstandigkeit der Axiome des logischen Funktionenkalkuls, Monatshefte
fur Mathematik und Physik 37, pp 349{360
 formal unentscheidbare Satze der `Principia Mathematica' und verwand-
1931 Uber
ter Systeme, Monatshefte fur Mathematik und Physik 38, pp 173{198
Henkin, L.
1949 The completeness of the rst-order functional calculus, Journal of Symbolic
Logic 14, 159{166
1954 A generalization of the notion of !-consistency, Journal of Symbolic Logic 19,
pp 183{196
Herbrand, J.
1930 Recherches sur la theorie de la demonstration, Trauvaux de la Societe des
Sciences et lettres de Varsovie, Classe III sciences mathematique et physiques
33, 128 pp
Hilbert, D. & Bernays, P.
1934 Grundlagen der Mathematik, vol. I, 2nd ed. reprinted 1968, Springer; xv+473
pp
1939 Grundlagen der Mathematik, vol. II, 2nd ed. reprinted 1970, Springer, xiv+561
pp
Kleene, S.C.
1936 A note on recursive functions, Bulletin of the American Mathematical Society
42, pp 544{546
1936 General recursive functions of natural numbers, Mathematische Annalen 112,
pp 727{742
1938 On notations for ordinal numbers, Journal of Symbolic Logic 3, pp 150{155
Lowenheim, L.
 Moglichkeiten im Relativkalkul, Mathematische Annalen 76, pp 447{470
1915 Uber
Lyndon, R.C.
1959 Existential Horn sentences, Proceedings of the American Mathematical Society
10, pp 994{998
Mal'cev, A.I.
1936 Untersuchungen aus dem Gebiete der mathematischen Logik, Matematicheskij
Sbornik, n.s., Akademiya Nauk SSSR i Moskovskoe Matematicheskoe Obshch-
estvo 1 (43), pp 323{336
Mostowski, A.
1947 On de nable sets of positive integers, Fundamenta mathematicae 34, pp 81-112
Orey, S.
1956 On !-consistency and related properties, Journal of Symbolic Logic 21, pp 246{
252
Peter, R.
 den Zusammenhang der verschiedenen Begri e der rekursiven Funktio-
1934 Uber
nen, Mathematische Annalen 110, pp 612{632
1935 Konstruktion nichtrekursiver Funktionen, Mathematische Annalen 111, pp 42{
60
Post, E.L.
1921 Introduction to a general theory of elementary propositions, American Journal
of Mathematics 43, pp 163{185
Robinson, R.M.
1947 Primitive recursive functions, Bulletin of the American Mathematical Society
54, pp 925{942
Rice, H.
1953 Classes of recursively enumerable sets and their decision problems, Transac-
tions of the American Mathematical Society 74, pp 358{366
Rosser, J.B.
1936 Extensions of some theorems of Godel and Church, Journal of Symbolic Logic
1, pp 87{91
Skolem, T.A.
1920 Logisch-kombinatorische Untersuchungen über die Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theorem über dichte Mengen, Norske Videnskaps-Akademi i Oslo, Mathematisk-Naturvidenskapelig Klasse, Skrifter 4, pp 1-36
1923 Begründung der elementaren Arithmetik durch die rekurrierende Denkweise ohne Anwendung scheinbarer Veränderlichen mit unendlichem Ausdehnungsbereich, Norske Videnskaps-Akademi i Oslo, Mathematisk-Naturvidenskapelig Klasse, Skrifter 6, pp 1-38
Tait, W.W.
1968 Normal derivability in classical logic, in: The Syntax and Semantics of Infinitary Languages, editor: Barwise, J., Lecture Notes in Mathematics 72, Springer, pp 204-236
Tarski, A. & Vaught, R.L.
1957 Elementary (arithmetical) extensions, Summaries of Talks Presented at the
Summer Institute for Symbolic Logic, Institute for Defence Analyses, CRD,
pp 160{163
Turing, A.M.
1936 On computable numbers, with an application to the Entscheidungsproblem, Pro-
ceedings of the London Mathematical Society, ser. 2, 42, pp 230{265 and
corrections ibid. 43, pp 544{546
Text Books
In the following we list some text books on mathematical logic. Of course, this can only be a collection of some good texts in mathematical logic.
General Logics
Barwise, J. (ed.)
1977 Handbook of mathematical logic, North-Holland, xi+1165 pp
This book has an encyclopedic character with advanced texts concerning var-
ious topics of all areas of mathematical logic.
Shoenfield, J.R.
1967 Mathematical logic, Addison-Wesley, vii+344 pp
This is the standard reference for standard theorems in mathematical logic. It
goes far beyond this book and is a good addition to it.
Recursion Theory
Hinman, P.G.
1978 Recursion theoretic hierarchies, Springer, xii+480 pp
This is an advanced text book dealing with definability theory and hierarchies.
Odifreddi, P.
1989 Classical recursion theory, North-Holland, xviii+688 pp
This is a comprehensively written survey of recursion theory.
Rogers, H.
1967 Theory of recursive functions and effective computability, McGraw-Hill, xxi+482 pp
This is the standard reference for recursion theory. It is written quite illustratively.
Model Theory
Chang, C.C. & Keisler, H.J.
1990 Model theory, 3rd ed., North-Holland, xvi+650 pp
This is the standard text book for model theory. It covers besides pure logic
Set Theory
Jech, Th.
1978 Set theory, Academic Press, xiv+391
This book is the standard reference in set theory.
Levy, A.
1979 Basic set theory, Springer, xiv+391 pp
This is a soft, comprehensive introduction to set theory.
Proof Theory
Girard, J.Y.
1987 Proof theory and logical complexity, vol. I, Bibliopolis, 503 pp
This book is a thorough introduction to proof theory of Peano arithmetic.
Pohlers, W.
1989 Proof theory, Lecture Notes in Mathematics 1407, Springer, vi+213 pp
This is a straightforward introduction to a special field of proof theory: impredicative proof theory.
Schutte, K.
1977 Proof theory, Springer, xii+299 pp
This book contains the basic proof theoretical results in impredicative proof
theory.
Takeuti, G.
1987 Proof theory, 2nd ed., North-Holland, x+490 pp
This book contains, besides some hardly readable chapters, a good survey of proof theory.
Others
Hurd, A.E. & Loeb, P.A.
1985 An introduction to nonstandard real analysis, Academic Press, xii+232 pp
This is as mentioned in the text.
Glossary
Notational Conventions
∅ the empty set, 3
IN the natural numbers, 3
dom(f) domain of the function f, 3
rg(f) range of the function f, 3
f↾Z restriction of f to Z, 3
X^Y the set of all functions from Y to X, 3
Pow(X) power set of X, 3
X \ Y the set X without Y, 3
X ∪ Y union of X and Y, 3
X ∩ Y intersection of X and Y, 3
idX the identity map on X, 3
Chapter I
∧ and, 6,7
∨ or, 6,7
¬ not, 6,7
→ implies, 6,7
∀ for all, 6,7
∃ there is, 6,7
C constant symbols, 7
F function symbols, 7
P predicate symbols, 7
L rst order language, 7
L(C ; F ; P ) rst order language with non-logical symbols speci ed, 7
FV(t) free variables of the term t, 8
FV(F) free variables of the formula F, 8
BV(F) bounded variables of the formula F, 8
¬ truth function for negation, 11
∧ truth function for conjunction, 11
∨ truth function for disjunction, 11
→ truth function for implication, 11
B (A) value of the sentential form A under the boolean assignment B , 12
AB equivalence of sentential forms, 13
S structure, 17
C constants of a structure, 17
F functions of a structure, 17
P predicates of a structure, 17
V set of all variables, 18
tS [] value of t under the assignment  in the structure S , 18
ValS (F; ) truth value of F in S under the assignment , 19
x  and di er at most at x, 19
S j= F[] F is true in S under , 19
S 6j= F[] F is false in S under , 19
Fx (t) F with x replaced by t, 20
sx (t) s with x replaced by t, 20
=i:h: is equal to (by the induction hypothesis), 21
LS L extended by constant symbols for elements of S, 22
SS S expanded by constants for S, 22
S ⊨ F F is valid in S, 23
⊨ F F is valid, 23
AxGT axioms of group theory, 23
S j= F[a1; : : :an] F is valid in S under the assignment a1 ; : : : ; an, 24
F S G semantical equivalence of formulas, 24
↔ if and only if, 24
PP(F) propositional parts of F, 27
PA(L) propositional parts of L, 27
PA(F) propositional atoms of F, 27
FB truth value of F under the boolean assignment B , 28
B boolean assignment induced by , 28
BM boolean assignment induced by the set M, 30
ω first limit ordinal, 32
card(M) cardinality of M, 32
ℵ0 cardinality of the natural numbers, 32
c∃xF Henkin constant for ∃xF, 35
LH Henkin extension of L, 38
KH Henkin constants, 38
degH (c) Henkin degree of c, 38
degH (F) Henkin degree of F, 38
HL Henkin set, 38
L1  L2 sub-language, 38
SH LH -expansion of S , 49
S ⊨ M S is a model of M, 40
card(L) cardinality of the language L, 40
M ⊨ F F is logical consequence of M, 46
F1, …, Fn ⊨ F F is logical consequence of F1, …, Fn, 46
M ⊢ F M proves F, 49
LT Tait-language for L, 54
P predicate symbols in the Tait-language for complements, 54
F negation of F in the Tait-language, 55
⊢T Δ proof of Δ in the Tait-calculus, 55
¬Δ negation of Δ, 56
S ⊨ Δ Δ is valid in S, 56
Δr reductum of Δ, 58
dom() length of the sequence , 59
!<! the number sequences, 59
  is initial segment of , 59
S search tree for , 60
FN prenex form of F, 68
FH Herbrand form of F, 69
LH Herbrand language, 69
⊥ empty formula, 74
⊤ dual of ⊥, 74
>-Ax  proof in the extended Tait calculus, 74
FT translation of F into the Tait-language, 76
F operator induced by F, 78
LI language with identity, 82
Id0 open form of the identity axioms, 83
M ⊨Id F logical consequence respecting LI-structures, 84
⊢Id Δ proof with identity axioms, 86
Idt term form of the identity axioms, 87
Chapter II
L(T) language of T , 93
T(L) extension of T by de nition, 95
Th(S ) theory of S , 99
φ : S1 ↪ S2 embedding, 99
φ : S1 ≺ S2 elementary embedding, 99
S1 ⊆ S2 substructure, 99
S1 ≺ S2 elementary substructure, 99
Diag(S ) diagram of S , 100
S1 = S2 the structures S1 and S2 are isomorphic, 101
Con(Ax) logical consequences of Ax, 102
ModL (T ) model class of T , 102
card(T) cardinality of T, 103
Chapter III
C^n_k constant function with value k, 116
P^n_k kth projection, 116
S successor function, 116
Sub(f; g1 ; : : : ; gn) substitution, 116
R(g; h) recursor, 116
pd predecessor function, 117
∸ arithmetical difference, 117
λ~x.f(~x, ~y) functions taking arguments for ~x, 118
sg sign function, 119
sg dual function to sg, 119
R characteristic function of R, 119
xz bounded -operator, 120
p(n) nth prime, 125
(x; y) multiplicity of p(y) in the factorisation of x, 125
<> code of the empty sequence, 125
hz0 ; : : : ; zni coded sequence, 125
(x)i ith component of a coded sequence, 125
Seq coded sequences, 125
lh(x) length of a coded sequence, 126
x_ y concatenation of coded sequences, 126
f course-of-values function, 126
C_PR codes of primitive recursive functions, 128
U_PR universal function for primitive recursive functions, 129
f : IN^n ⇀ IN partial function, 130
f(x)↑ f is undefined at x, 130
f(x)↓ f is defined at x, 130
f(n) ≃ g(n) partial equality, 131
μf unbounded μ-operator, 131
μ^P unbounded μ-operator, 131
C_P codes of partial recursive functions, 132
Cmp computation predicate, 132
{e}^n(~z) partial recursive function with code e, 133
T^n(e, ~z, z) Kleene's T-predicate, 133
Φ^n universal partial recursive function, 134
S_n^m S_n^m-predicate, 135
W_e^n recursively enumerable set with index e, 138
G_f graph of f, 140
K diagonal set, 141
INC increase function of a RAM, 145
DEC decrease function of a RAM, 145
BEQ branch function of a RAM, 145
M_P marks of the programme P, 145
δ_P transition function, 146
δ_P^i iterated transition function, 146
F_{P,~x} formula describing P, 148
Chapter IV
L_PA language of Peano Arithmetic, 153
PA Peano Arithmetic, 153
NT number theory, 154
Chapter V
L_I many-sorted language, 165
FV(t) free variables of t, 166
FV_i(t) free variables of sort i in t, 166
S_I many-sorted structure, 166
t^+ translated term, 168
F^+ translated formula, 168
Ont_I ontological axioms, 168
Ont ontological axioms, 169
L_ω language of ω-logic, 172
S_ω ω-structure, 172
⊢_ω F calculus with ω-rule, 173
⊨_ω F validity in ω-structures, 173
L^2 second order language, 175
S^2 second order structure, 175
L^2_PA language of second order Peano Arithmetic, 175
N constant symbol for the natural numbers, 175
PA^2 second order Peano Arithmetic, 175
L^2_w weak second order language, 176
S^2_w weak second order structure, 176
Index
∀-axiom, 49
admissible rule, 64
ℵ-function, 32
alphabet, 7
antecedent, 54
Aristotle, 1
arity, 5, 7
∀-rule, 49, 55
∧-rule, 55
assignment, 18, 167
  boolean ~, 12, 28
atomic formula, 9
axiom
  of choice, 31
  system, 102
axiomatizable, 109
  theory, 102
back-and-forth, 108
bar induction, 59
basic
  functions, 116
  operations, 116
Bernays, P., 127
Boole, G., 2
boolean
  assignment, 12, 28
  operation, 119
boolean valid, 49
bounded quantification, 119
calculus
  Gentzen style ~, 54
  Hilbert style ~, 53
  Tait style ~, 54, 55
Cantor, G., 108
cardinal, 32
categorical, 102
characteristic function, 119
Church's thesis, 147
Church, A., 147, 148
clause, 14
closed term, 9
compactness theorem, 84
  2nd version, 46
  for first order logic, 40
  for many-sorted logic, 170
  for propositional logic, 34
  for weak second order logic, 177
complete
  ω- ~, 159
  axiom system, 102
  calculus, 48
  set of connectives, 16
completeness theorem, 50
  for many-sorted logics, 171
connective, 5, 7, 10, 11, 165
  complete set of ~s, 16
consistent, 23
  ω- ~, 158
  sententially ~, 29
  finitely ~ ~, 29
  maximally ~ ~, 29
constant symbol, 7
countable
  language, 31
  set, 33
course-of-values
  function, 126
  recursion, 127
Craig, W., 75
cut-rule, 58
De Morgan, A., 2
deduction
  logical ~, 49
definition
  explicit ~, 78
  implicit ~, 78
  inductive ~, 9
Descartes, R., 1
diagram, 100
  elementary ~, 100
domain, 109
  of a function, 130
  of a structure, 17
∃-axiom, 49
∃-formula, 66
Ehrenfeucht, A., 112
elementary
  equivalent, 101
  class, 109
  diagram, 100
  embedding, 99
  extension, 99
  substructure, 99
embedding, 99
  elementary ~, 99
end extension, 156
epimorphism, 83
equivalent
  elementary ~, 101
  semantically ~, 24
  sentential ~, 13
  well-orderings, 189
∃-rule, 49, 55
ex falso quodlibet, 11, 46
expansion, 38
extension, 94
  by definitions, 94
  conservative ~, 94
  elementary ~, 99
  end ~, 156
field, 109, 188
fixed point, 9
formula, 8, 166
  atomic ~, 9
  empty ~, 74
  existential ~, 66
  interpolation ~, 72
  irreducible ~, 58
free variable, 8
Frege, G., 2
function, 5
  basic ~, 116
  characteristic, 119
  course-of-values ~, 126
  partial ~, 130
  partial recursive ~, 131
  primitive recursive, 117
  RAM computable ~, 146
  recursive ~, 131
  symbol, 7
  total ~, 131
  transition ~, 145
  truth ~, 10, 11
  universal ~, 134
generator, 111
Gentzen, G., 54
Gödel number, 128
Gödel, K., 2, 35, 50, 116, 141, 153, 158
grammar, 7, 9
group, 6, 17, 109, 173
  theory, 7, 23
Hamilton, W., 2
Hauptsatz, 64, 66
Henkin
  constant, 38
  degree, 38
  extension, 38
  set, 35
Henkin, L., 35, 50, 173
Herbrand
  form, 69
  language, 69
Herbrand, J., 66
Hilbert, D., 53, 127, 163
Id-axiom, 86
Id-rule, 86
incomplete
  ω- ~, 159
incompleteness theorem
  first ~, 160, 161
  second ~, 160, 164
inconsistent
  ω- ~, 159
induction
  bar ~, 59
  on the definition, 9
  scheme, 153
  transfinite ~, 32, 188, 196
inductive definition, 9
inference
  boolean ~, 50
infinitesimal, 41
infix notation, 8
initial segment, 59, 191
  proper ~, 191
instruction, 145
interpolation formula, 72
inversion, 56
isomorphic, 101
joint consistency theorem, 75
Kleene, S.C., 131, 133–138, 140–142
λ-abstraction, 118
language, 5
  countable ~, 31
  first order ~, 6, 7
  Herbrand ~, 69
  many-sorted ~, 165
  of a theory, 93
  second order ~, 6
  Tait ~, 54
  with identity, 82
lattice, 109
L-axiom, 49, 55
Leibniz, G.W., 1, 2, 115
lemma
  diagonalisation ~, 161
  Herbrand's ~, 90
  principal semantic ~, 62
  principal syntactic ~, 61
  Tarski's ~, 107
  Zorn's ~, 31, 34
length, 59
letter, 7
liar antinomy, 160
limit ordinal, 31, 195
linear ordering, 188
Löwenheim, L., 104
Löwenheim-Skolem theorem
  downwards, 104
  for weak second order logic, 177
  for many-sorted logic, 170
  upwards, 106
logic
  classical ~, 11
  first order ~, 7
  higher order ~, 175
  intuitionistic ~, 11
  many-sorted ~, 165
  ω- ~, 172
  S- ~, 172
  second order ~, 175
  third order ~, 175
  weak second order ~, 176
logical consequence, 46
Lullus, R., 1
Lyndon, R.C., 76
main formula, 55
Mal'cev, A.I., 35, 50
many-sorted
  language, 165
  logic, 165
mark
  identification ~, 145
  start ~, 145
  stop ~, 145
  transition ~, 145
Megarians, 1, 11
model, 40, 93
  class, 102
  theory, 2
modus ponens, 46, 49
monotone operator, 78
Mostowski, A., 137, 140
node, 59
  topmost ~, 59
normal form
  disjunctive ~, 14
  conjunctive ~, 14
  theorem, 133
number sequence, 59
number theory, 41
object, 5
occurrence
  positive, 76
  negative, 76
ω-complete, 159
ω-consistent, 158
ω-incomplete, 159
ω-inconsistent, 159
omitting types theorem, 112
operator
  globally monotone, 78
  monotone, 78
ordering
  linear ~, 188
  partial ~, 34
ordinal, 31, 191
  limit ~, 31, 195
  successor ~, 31, 195
Orey, S., 173
∨-rule, 55
partial
  function, 130
  recursive function, 131
path, 59
Peano Arithmetic, 153
Peano, G., 2
Péter, R., 127, 136
Post, E.L., 134, 136, 140, 141
predicate, 5
  symbol, 7
prenex form, 68
primitive recursive
  function, 117
  relation, 119
proof theory, 2
proposition, 5
propositional
  atom, 27
  part, 27
provable, 93
pure
  conjunction, 14
  disjunction, 14
quantifier, 6, 7
  existential ~, 6
  universal ~, 6
random access machine, 145
recursion
  course-of-values, 127
  simultaneous ~, 127
  theory, 2
  transfinite ~, 32, 196
recursive function, 131
redex, 58
regular structure, 176
relation, 136
  primitive recursive ~, 119
  recursive ~, 136
  recursively enumerable ~, 136
  semi-recursive ~, 136
retract, 38
Rice, H., 142
Robinson, A., 75
Robinson, R.M., 130
root, 59
Rosser, J.B., 137, 158
Russell, B., 2
satisfiable, 23
Schütte, K., 58
search operator
  bounded ~, 120
  unbounded ~, 131
search tree, 60
semantics, 10, 166
sentence, 10, 24
sentential
  connective, 7
  equivalent, 13
  form, 12
sequent, 54
set theory, 2
Sheffer stroke, 16
Skolem, T., 104, 116, 127
soundness theorem, 49
standard interpretation, 82
Stoics, 1, 11
structural rule, 56
structure, 17, 166, 172, 175, 176
  expanded ~, 22
  regular ~, 176
sub-language, 38
substitution
  primitive recursive ~, 119
substructure, 99
  elementary ~, 99
succedent, 54
successor ordinal, 31, 195
Syllogistic, 1
symbol
  auxiliary ~, 7, 166
  constant ~, 7, 165
  function ~, 7, 165
  non-logical ~, 7
  predicate ~, 7, 165
syntax, 10
Tait calculus, 54, 55
Tait, W.W., 54
Tarski, A., 106, 164
term, 5, 8, 166
  closed ~, 9
theorem
  Beth's definability ~, 78
  Church's ~, 148
  compactness ~, 84
    2nd version, 46
    for first order logic, 40
    for many-sorted logic, 170
    for propositional logic, 34
    for weak second order logic, 177
  completeness ~, 50, 64
    for many-sorted logic, 171
  Craig's interpolation ~, 75
  deduction ~, 46
  first incompleteness ~, 160, 161
  Herbrand's ~, 71, 91
  interpolation ~, 72, 91
  joint consistency ~, 75
  Löwenheim-Skolem ~
    downwards, 104
    for weak second order logic, 177
    for many-sorted logic, 170
    upwards, 106
  Lyndon's interpolation ~, 77
  normal form ~, 133
  ω-completeness ~, 174
  ω-soundness ~, 173
  omitting types ~, 112
  Post's ~, 140
  recursion ~, 135
  Rice's ~, 143
  Rosser's ~, 158
  second incompleteness ~, 160, 164
  S_n^m- ~, 134
  soundness ~, 49, 57
  well-ordering ~, 32
theory, 93
total function, 131
transfinite
  induction, 32, 188, 196
  recursion, 32
transition
  function, 145
tree, 59
  well-founded ~, 59
truth
  function, 10, 11
  table, 11
  value, 19
Turing, A.M., 134, 136, 147, 148
type, 110
universal function, 134
universe, 166
valid, 23
  boolean ~, 49
value
  of a formula, 19
  of a term, 18
  truth ~, 19
variable, 6, 7, 165
  bounded ~, 8
  free ~, 8
  propositional ~, 11
Vaught's test, 106
Vaught, R.L., 106
well-ordering, 188
  theorem, 32