
This book is concerned with logical systems, which are usually termed *type-free* or *self-referential* and emerge from the traditional discussion on logical and semantical paradoxes. We will consider non-set-theoretic frameworks, where forms of *type-free abstraction* and *self-referential truth* can consistently live together with an underlying theory of combinatory logic.

However, this is *not* a book on paradoxes; nor do we aim at a grand logic à la Frege-Russell, inspired by a foundational program. We shall rather investigate type-free systems, in order to show that:

(i) there are rich theories of self-application, involving both operations and truth, which can serve as foundations for property theory and formal semantics;

(ii) these theories give new outlooks on classical topics, such as inductive definitions and predicative mathematics;

(iii) they are promising as far as applications are concerned.

This way of looking at things is justified by the history of the antinomies in our century. In spite of isolated foundational and philosophical traditions, the research arising from the paradoxes has moved progressively closer to the mainstream of mathematical logic, and it has received substantial impetus during the last twenty years: a number of significant developments, techniques and results have cropped up through the work of several logicians (see below for our main debts). Therefore a major aim of this book is to attempt a unifying view of relevant research in the field, by dwelling on connections with well-established logical knowledge and on applicable theories and concepts.

However, the present work is far from being comprehensive. We do not treat illative combinatory logic (with the exception of a system of Ch. VI, investigated by Flagg and Myhill 1987), nor do we deal with the Barwise-Etchemendy approach to self-reference via non-well-founded sets. Another significant direction, which is only touched upon in two sections of chapter XIII, is the systematic development of the general theory of semi-inductive definitions (in the sense of Herzberger, Gupta and others).

The project started some years ago, when Prof. A. S. Troelstra kindly suggested an English translation of the author’s monograph (Cantini 1983a) about theories of partial operations and classifications in the sense of Feferman (1974). The attempted translation soon shifted towards a thoroughly expanded revision of the old text, and eventually gave rise to an entirely new set of notes at the end of 1988. After a pause of almost two years, these notes were taken up again, fully rewritten and reorganized. The manuscript was submitted to the editor for final refereeing in October 1993.

The content and the results of the present version are disjoint from the 1983 monograph; they partly overlap with the 1988 notes, except for a different choice of primitive notions and for the addition of Ch. VI, parts of Ch. XIII and the epilogue. Ch. VIII offers a development of topics contained in the author’s paper *Levels of Truth* (to appear in the Notre Dame Journal of Formal Logic, 1995): we thank the Editors for granting permission to use parts of that paper in Ch. VIII of this book.

*Acknowledgments.* The present work owes a great deal to the writings of several logicians, and even though I have tried hard to make a complete list of my debts in the text and in the reference list, I am sure that there are omissions: I apologize for them.

As to the proper content of the book, pertaining to type-free abstraction and self-referential truth, I would like to underline my intellectual debt to the following papers (listed in alphabetical order): Aczel (1980), Feferman (1974), (1984), (1991), Fitch (1948), (1967), Friedman and Sheard (1987), Kripke (1975), Myhill (1984), Scott (1975).

Prof. W. Buchholz offered invaluable help in correcting errors of all kinds and in proposing technical improvements. I owe special thanks to him, also because the topics I dealt with did not touch upon his main research interests.

I am grateful to Prof. S. Feferman and to Prof. G. Jäger for keeping me informed over the years about their own research on type-free systems and proof theory, and for important advice. Jäger’s Ph.D. student, T. Strahm, made useful critical comments on the first chapter.

Dr. R. Giuntini and Dr. P. Minari undertook the final proof-reading of chapters I–VIII and XII–XIV; I warmly thank them for a host of useful remarks and corrections.

I am deeply indebted to Dr. A. P. Tonarelli for proof-reading the remaining chapters and for eagle-eyed assistance in the unrewarding task of preparing the final manuscript.

Of course, I must stress that I am fully responsible for all errors to be found in the work.

I am grateful to the Alexander-von-Humboldt Stiftung (Germany) for granting me a Wiederaufnahme of a research fellowship at the Ludwig-Maximilians-Universität München in the summer semester of 1991, when the present work was at a difficult stage.

Partial support to the present project was granted by the Italian National Research Council (CNR) and the Italian Ministry for University, Scientific Research and Technology (MURST).

Last but not least, this work is dedicated to my children Giulia and Francesco.

Firenze, April 1995

"*There never were set-theoretic paradoxes, but the property-theoretic paradoxes are still unresolved*" (K. Gödel, as reported by J. Myhill 1984).

"… *the theory of types brings in a new idea for the solution of the paradoxes, especially suited to their intensional form. It consists in blaming the paradoxes not on the axiom that every propositional function defines a concept or a class, but on the assumption that every concept gives a meaningful proposition, if asserted for any arbitrary object or objects as arguments*" (K. Gödel 1944)

The starting point of our investigation is the idea that the notions of *predicate application* and *property* are susceptible of independent study; in particular, these intuitive notions should be kept distinct from their counterparts of *set-theoretic membership* and *set*, as is readily seen through a brief comparison.

According to the iterative conception, a set is always a collection of mathematical entities of a given type (possibly, sets of lower rank); thus it *has its being in its members*, and equality among sets is ruled by the extensionality principle. Sets are conceived as completed totalities, generated by *language-independent operations and iterations thereof*. The membership relation is a *standard mathematical relation*: this means that *a* ∈ *b* is a well-defined proposition, whenever *a* and *b* are sets. Moreover, if we reflect upon the intuitive picture of the cumulative hierarchy, we come to know that ∈ is well-founded and does not allow self-application.

By contrast, a property *is an abstract object, which is grounded in a concept, i.e. a function, not in the objects which fall under it* (Frege 1984, p. 199); it has no a priori bound on its extension, and it usually depends on some sort of explicit or implicit finite specification. Properties satisfy the so-called unrestricted abstraction or comprehension principle (*AP*): every condition *A*(*x*) determines a property {*x*: *A*}, which applies to all and only those things of which *A*(*x*) holds true.

Of course, in the face of the well-known paradoxes, *AP* introduces elements which are open to dispute and to multiform solutions; for instance, as Gödel’s citation suggests, the predication relation – henceforth η – cannot always be meaningful, and therefore the laws of standard (classical) logic cannot be valid.

The present approach, to be developed in various forms in this book, tries to keep the regimentation of predication and abstraction to a minimum; we maintain that {*x*: *A*} is an individual term and that η applies to statements possibly involving η itself. Thus we are looking for flexible, type-free theories of predication. More specifically, we are influenced by the tradition of illative combinatory logic in the sense of Curry and Fitch, by the work of Feferman (1975) on partial classifications and of Aczel (1980) on Frege structures. The inspiring idea is that properties and predication can be adequately explained in terms of the primitive notions of function and truth.

As to the notion of function, we cannot expect to deal with functions in the set-theoretic sense. In fact properties, given in intension, may apply to anything in a given realm, without type restrictions; and the same must hold of the functions underlying the properties themselves. Thus we are driven to understand functions essentially as rules of construction (or, in short, operations) in the sense of combinatory logic. In contrast to the set-theoretic conception, operations are prior to their graphs and have no a priori bound on their domain; in particular, *they do support non-trivial forms of self-application*. On this view, *it is natural to identify properties with object-correlates of functions, and to reduce the abstraction operation to familiar* λ-*abstraction*; formally, {*x*: *A*} simply becomes a λ-term of the form λ*x*[*A*], where [*A*] is a term of combinatory logic, canonically representing the function defined by the condition *A* (of any given language).
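A minimal illustration of non-trivial self-application, in Python rather than in the formal systems of this book: the call-by-value fixed-point combinator, which obtains recursion purely by applying a term to itself. (The names `Z` and `fact` are ours, chosen for the sketch; nothing here is part of the book's formalism.)

```python
# Self-application at work: the call-by-value fixed-point combinator Z.
# Z(f) returns a function g satisfying g = f(g), built by applying the
# term (lambda x: ...) to itself -- impossible for well-founded,
# set-theoretic functions, but natural for operations.

Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without any explicit recursion:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
```

The eta-expansion `lambda v: x(x)(v)` is only needed because Python evaluates arguments eagerly; in a combinatory setting the analogous construction is the usual fixed-point theorem.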

The second point concerns the reduction of predication to a primitive notion of reflective (or self-applicable) truth. Indeed, the expression *y*η{*x*: *A*} is analyzed as *T*({*x*: *A*}*y*): "the result of applying the function represented by {*x*: *A*} to the argument *y* is true", where *T* stands for the truth predicate and juxtaposition of {*x*: *A*} and *y* denotes application. The abstraction principle *AP* then becomes obviously derivable from the basic law of λ-abstraction (i.e. we convert {*x*: *A*}*y* to the term [*A*[*x* := *y*]], the result of replacing *x* with *y* in [*A*]).

Of course, these preliminary considerations do not solve the main problem of specifying the basic features of the truth predicate *T*. Nevertheless, they direct our attention towards the study of *simple mathematical objects*, namely expansions of combinatory algebras by reasonably closed truth sets. The typical structure (essentially) consists of a pair 〈*M*, *T*〉, where *M* is a combinatory algebra and *T* is a subset of *M*.

These expansions are uniformly described by means of operators Γ from the power-set of *M* into itself, acting as *abstract valuation schemata*. Informally, if *X* ⊆ *M*, Γ(*X*) represents the set of truths we come to know by means of the semantic rules of Γ on the basis of *X*. The interesting truth sets will be those subsets of *M* that cannot be further extended with new information by means of Γ, i.e. the so-called *fixed points* of Γ, satisfying Γ(*X*) = *X*. Among these sets, a special role will be played by the minimal ones: they are technically the most interesting objects for the recursion-theoretic and proof-theoretic investigations. Conceptually, they reflect the idea that abstraction is not the mere description of an independent logical realm, but rather a process with its own logic implicit in Γ.
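Over a finite carrier the minimal fixed point of a monotone operator can be reached simply by iterating Γ from the empty set until nothing new is added. The following sketch is purely illustrative (the names `lfp`, `gamma` and the toy operator are ours, not the book's):

```python
# A minimal sketch: the least fixed point of a monotone operator Gamma
# on subsets of a finite carrier M, obtained by iteration from the
# empty set.  Monotonicity guarantees that the iteration only grows,
# so on a finite carrier it must stabilize at a fixed point.

def lfp(gamma, bottom=frozenset()):
    """Iterate a monotone operator from `bottom` until Gamma(X) = X."""
    x = bottom
    while True:
        nxt = gamma(x)
        if nxt == x:          # no new information: a fixed point
            return x
        x = nxt

# Toy carrier: the numbers 0..9; Gamma adds 0 and closes under n -> n+2.
M = frozenset(range(10))
def gamma(x):
    return (frozenset({0}) | frozenset(n + 2 for n in x)) & M

print(sorted(lfp(gamma)))   # [0, 2, 4, 6, 8]
```

The semantic operators Γ of the text act on an infinite carrier, so the minimal fixed point is in general reached only by transfinite iteration; the finite sketch merely shows the mechanism.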

The combinatory algebra *M* itself can be regarded as an abstract syntax, in which formal languages can be processed and defined. In particular, elements of *M* may be thought of as symbolic expressions, to be combined and identified according to the operations and laws of combinatory logic. *M* will typically include (notations for) natural numbers or any other chosen ground type, but also, and most importantly, objects representing functions. The objects associated to computable functions can be seen as (functional) programs, implementing effective algorithms. On the other hand, still pursuing the computational analogy, properties – as representatives of (generally non-computable) propositional functions – can be considered as programs implementing a sort of *generalized algorithm*. While application of an effective algorithm to an input produces a *computation*, possibly converging to a *value*, a property {*x*: *P*(*x*)} is ultimately applied to an object in order to produce a *verification*.

We wish to conclude by raising three conceptual points. First of all, the notion of truth *T* is not understood as a formalized truth predicate in the usual metamathematical sense: *T* classifies *the objects of a combinatory algebra*, and not an *inductively defined* collection of sentences! In this sense, *T*, like set-theoretic membership, *does not depend upon a specific language*: *the predicate T is a primitive concept, which is prior to the specification of any formalism and is the source of the abstract notion of proposition*. There are objects which carry information and can be called propositions; they can be partitioned into atomic and complex. Atomic propositions are simply grasped and reflect implicit (synthetic) knowledge, to be accepted as given. On the other hand, complex propositions correspond to some sort of construction via logical operators; thus they require a(n analytic) process in order to be understood (think of the search for verification), and they are controlled by the truth predicate *T*.

As a second point, we would like to stress the importance of having *operations acting on classifications*. Indeed, the fact that operations and classifications live together has as a consequence a symmetry lacking in set theory: not only can we *classify* operations, but we can *operate* on classifications. So we can treat classifications which depend on parameters as operations; this is generally impossible in set theory. It follows that many constructions and statements acquire an explicit character and greater uniformity.

A final comment is left for the choice of non-extensional basic notions. In general, even if we make use of intensional data (like definitions or enumerations), we never appeal to specific features of them, and thus we obtain results with an *intrinsic* character. Moreover, we find that the non-extensional language helps to avoid strong logical principles and to carry out proofs in rather weak systems (just as remarked in Kreisel 1971, p. 170); it often permits uniform and explicit statements of the results obtained, which do not obscure the appreciation of proper extensional aspects. On the contrary, non-extensional and extensional features are free to interact in a unified framework. As will be clarified by the introduction of the approximation structure in chapter III and its applications in the subsequent chapters, the essential interplay of these aspects leads to rather smooth generalizations of the Myhill-Shepherdson theorem (§17), to the appreciation of extensional choice principles (§20), and to a satisfactory internal treatment of inductive definitions (boundedness and covering; §23).

As we previously explained, the starting point of the book is the need for an independent logical approach to the notions of predicate application, property, abstraction, truth. The arrangement of the material reflects the increasing logical complexity of the truth notions that are met in the text. The different proposals, though generally motivated by model-theoretic constructions, are developed in axiomatic style. This is mainly because we wish to emphasize the connections with standard concepts of mathematical logic and deductive systems for (substantial parts of) mathematics.

Proof-theoretic considerations and conservative extension results play an important role in classifying the various systems: very loosely, we tend to stress the importance of frameworks not stronger than Peano arithmetic and to distinguish predicative from impredicative systems. We also underline that type-free systems should not be opposed to type theories; we regard the former as a sort of generalized type assignment systems, in which types are left implicit and emerge from the theory itself.

More concretely, the book is divided into five parts, which group together relatively homogeneous topics. The main thread can be described as follows. By and large, the first three parts form a sort of independent essay on a first-order theory of reflective truth over combinatory logic, whose truth axioms essentially stem from Fitch’s extended basic logic (Fitch 1948) through Scott (1975) and Aczel (1977). The notion of reflective truth explicitly refers to Feferman (1991). After the general results of part A, the theory is motivated and enriched by means of recursion-theoretic investigations (part B), by showing its unifying power and studying its semantics (part C). Parts D and E explore alternative routes. Part D deepens the intuitions underlying the systems of parts A–C by use of proof-theoretic techniques and by relativizing the concept of truth. Part E is experimental in character and scans over a variety of approaches which are still under investigation.

To give the reader a closer idea of what is in the book, we shall survey the content of the single chapters. A more detailed account can be found in the introductory section to each chapter.

**Part A:** it offers a general introduction to the basic notions of operation and reflective truth. The basic aim is to illustrate, both axiomatically and semantically, a consistent notion of type-free logical structure, which will be fundamental in the whole book.

The opening chapter contains the necessary preliminaries on (expanded) combinatory logic, which is here taken as the core of a classical first-order theory of operations OP. There is an introduction to concrete models of OP, since they form the ground structures of the entire book.

In the second chapter, we inductively expand combinatory algebras with a notion of self-referential truth, which naturally generalizes the familiar Tarskian semantical clauses, in order to cope with a situation of partiality. The given expansions only depend on the isomorphism type of the underlying combinatory algebras. By inspection of the model-theoretic construction, we are led to a minimal axiomatic first-order system MF−, which contains a version of the Kripke-Feferman axioms for reflective truth and yields a theory of partial and total properties (= classes henceforth), satisfying natural closure conditions. For instance, classes are provably closed under Feferman’s join and elementary comprehension principles; moreover, MF− is provably closed under inductive definitions (though not capable of showing the corresponding induction schemata). We also consider extensions of MF− with various number-theoretic induction principles.

**Part B:** we show that there is a two-sided link between generalized recursion theory and languages with operations and self-referential truth. Not only are inductive definitions crucial for building up models of self-referential languages, but these languages also offer smooth formulations of non-trivial definability results.

In chapter III, self-referential truth is related to inductive definability in the sense of Moschovakis (1974). The recursion-theoretic approach suggests extending the minimal system by simple approximation conditions on properties. The new axioms, together with MF−, a powerful generalized induction schema GID and number-theoretic induction for classes, form an axiomatic system PW c+GID, which is still conservative over (indeed proof-theoretically reducible to) the theory of operations, and hence over Peano arithmetic.

In chapter IV we show that PW c+GID proves a number of interesting consequences (separation and reduction principles) and, above all, an analog of the Myhill-Shepherdson theorem for operations which are η-extensional (i.e. extensional with respect to the predication relation). The results can be restated in topological terms via a natural generalization of the positive information topology.

In chapter V, we produce models for admissible set theory and a boundedness theorem for inductive sets, again provably in PW c+GID.

**Part C:** it is a natural complement to the previous parts. In chapter VI, the reader will find two alternative type-free logics. The first system, due to Myhill (1972, 1980), relies on a logic with levels of implication. The second system, inspired by Aczel-Feferman (1980), offers a type-free logic with a definitional equivalence relation on formulas, which is inspired by conversion in combinatory logic. Both systems are formally interpreted in the theory PW c+GID of chapters IV–V.

Chapter VII offers a general outlook on the global structure FIX. We prove that the set of sentences A such that TA holds in every structure of FIX is axiomatizable. It is shown that FIX is a very rich and intricate non-distributive complete lattice; a few applications to consistency results and to formal semantics are thereby outlined (see the connection with Kripke 1975).

**Part D:** it focuses on proof theory and the foundations of mathematics. We investigate a type-free logic TLR, which is able to internalize, to a certain extent, quantification on classes and negative semantic information. The intuitive idea is that truth is the (direct) limit of local self-referential truth predicates, which are related to one another by a directed pre-order of levels.

Formally, we present TLR and its variants in chapter VIII.

In chapter IX we develop the prerequisites for a proof-theoretic analysis of TLR: in particular, we describe a well-ordering proof of the so-called Feferman-Schütte ordinal. Chapter X proves that the theory of truth with levels is proof-theoretically reducible to (infinitary) theories of finitely iterated self-referential truth ITn∞; on the other hand, each ITn∞ is shown to be reducible to fragments of predicative analysis in chapter XI. The methods used include cut elimination for ramified systems in ω-logic and asymmetrical interpretations à la Girard.

**Part E:** we are concerned with logics of truth and type-free abstraction, which are based upon non-reductive, non-truth-functional semantical valuation schemata. In contrast to the reductive schema underlying the semantics of chapter II, we study systems which are well-behaved with respect to logical consequence (e.g. a tautology is always classified as true; this does not hold under a partial semantics à la Kleene).

Chapter XII investigates a minimal system VF endowed with a simple supervaluation monotone semantics; VF naturally justifies principles of generalized inductive definitions (in contrast to what happens with the theories of parts A–C, it yields a model of the theory of elementary inductive definitions ID1). We also develop an alternative interpretation for VF by means of proof-theoretic infinitary methods.

Chapter XIII addresses the problem of extending the logic of truth, as codified in VF. We discuss a refinement of supervaluation methods; but the new point is the introduction of semi-inductive definitions (in the sense of Herzberger 1982) and the application of the related notion of stable truth. We also consider consistent though ω-inconsistent logics of truth, due to Friedman, Sheard and McGee.

The epilogue (chapter XIV) discusses prospective applications of type-free systems, as they result from the literature. In particular, we consider a logical theory of constructions, which has been investigated in Theoretical Computer Science and is strictly linked with the theories of part D. We conclude with a short survey of applications in other fields.

The interdependence of the chapters is roughly indicated in the diagram below:

Certain parts of the book, once suitably combined, offer a non-conventional approach to:

1) generalized recursion theory and inductive definability (part A + part B);

2) predicative mathematics and subsystems of analysis (part A + part B + part D).

If we disregard the recursion-theoretic and proof-theoretic parts, the book can serve as an introduction to:

3) formal semantics (part A + III + part C + VIII (§§36–39) + part E).

If the reader has in mind possible connections with logics for Artificial Intelligence, Theoretical Computer Science or semantics for natural languages, the suggestion 3) can be profitably modified to:

4) part A + part B + VIII (§§36–39) + part E.

Some chapters have appendices, containing additional details for proofs or suggestions for alternative routes: they can always be skipped without prejudice to the understanding of the later developments.

The text is intended for readers who are familiar with the topics usually covered in an advanced undergraduate or basic graduate logic course. Thus we assume acquaintance with the elements of first-order logic and model theory, recursion theory, set theory and proof theory, as they are developed in good standard textbooks, or in the corresponding chapters of the *Handbook of Mathematical Logic* (Barwise 1977).

In particular, it is useful to have a preliminary knowledge of the basic facts of hyperarithmetic and inductive definability (see Aczel 1977a, Moschovakis 1974). For the proof theory of chapters VIII–XI, a previous exposure to sequent calculi and ω-logic would be helpful (e.g. see Schwichtenberg 1977 or the textbooks of Takeuti 1975, Schütte 1977, Girard 1987, Pohlers 1989). The simple topological notions of Ch. IV can be obtained from any standard textbook in general topology. Ch. VII presupposes a few elementary facts about partially ordered sets and lattices, usually met in logic courses (consider the classical reference of Birkhoff 1967). In Ch. VIII we hinge upon some advanced results of admissible set theory, to be found in Barwise (1975), Hinman (1977); however, the basic definitions and results are briefly recalled there.

A number of notations are adopted in the whole text. We summarize them below.

The book is structured in five parts from A to E; each part is subdivided into chapters; the chapters are organized in sections, which are numbered in progressive order. Within each section, each specific item (subsection, definition, remark, axiom, rule, theorem, lemma or corollary) is usually assigned a pair "*m.n*" of numbers: "*m.n*" refers to the *n*th item of the *m*th section. Sometimes, for finer classifications and reference, we allow the use of three (and exceptionally four) numbers (e.g. 37.4.1 locates the first sub-item of the 4th item of section 37). In some cases, we specify the class which the referred item belongs to (e.g. we may speak of theorem 3.2 or definition 34.5).

References to publications are given by means of the author’s name followed by the year of publication, possibly followed by a letter in the case of more publications by the same author in the same year.

:= is used as the definition symbol (definiendum on the left of :=, definiens on the right), while = stands for literal identity, unless it is specified otherwise.

We adopt the standard notions of free and bound variable; FV(E) is the set of free variables of the expression E. *E*[*x *:= *t*] denotes the substitution of *x *with *t *in E. E(*E′*) means that *E′ *possibly occurs as a subexpression of *E*.

We stick to the usual conventions on the binding strength of logical symbols: ¬ and quantifiers bind more than the remaining connectives, while ∨, ∧ bind more than → and ↔. To enhance readability, dots may be used instead of brackets as separating symbols. *A* ∧ *B*.→ *C*, *A* →. *B* ∨ *C*, ∃*x.A* stand for (*A* ∧ *B*) → *C*, *A* → (*B* ∨ *C*), ∃*xA* (respectively); λ*x.ts* shortens λ*x*(*ts*), etc. Sometimes, we make use of bounded quantifiers as abbreviations: if *R* := η, ∈, then ∀*xRa.A*, ∃*xRa.A* shorten ∀*x*(*xRa* → *A*), ∃*x*(*xRa* ∧ *A*). If bounded quantifiers are iterated, we write (∀*xRa*)(∀*yRb*)(…), (∀*xRa*)(∃*yRb*)(…), or even ∀*xRa.*∀*yRb*.(…), ∀*xRa.*∃*yRb*.(…), for the proper iterated forms (respectively).

We shorten logical equivalence on the metalevel (i.e. "if and only if") by the standard "iff". "∃!*x*" stands for "there is exactly one *x*". Sometimes, we adopt ⇒ as implication on the metalevel.

The logical complexity of any given formula *A *is the number of distinct occurrences of logical symbols in *A*.

We use the standard notations:

𝒫(*X*) (power set of *X*), *X* − *Y*, ⊂, ⊆, ⊃, ⊇, × (Cartesian product), *f*: *X* → *Y* (to be read as "*f* is a function from *X* to *Y*"), *cZ* (characteristic function for the set *Z*).

{*x*: …} is the set of objects satisfying the condition (…); {*a*1, …, *an*} is the set containing exactly the elements *a*1, …, *an*. 〈…〉 denotes set-theoretic *n*-tuple operation, unless otherwise specified.

We warn the reader that set-theoretic symbols will be sometimes adopted as abbreviations for corresponding non-extensional operations on properties and predicates. But possible ambiguities will be spared by the context. The arithmetical symbols are the standard ones.

⊢ *A* stands for "*A* is derivable".

*S* ⊢ *A* means that *A* is derivable from *S* by means of classical logic (unless otherwise specified).

We often carry out proofs by induction (either in the metatheory or within axiomatic theories). As a rule, we adopt the acronym IH as shorthand for "induction hypothesis".

**Part A **

Combinators and Truth

**Introduction to Combinators and Truth **

§1. **The basic language **

§2. **Operations I: general facts **

§3. **Operations II: elementary recursion theory **

§4A. **The Church-Rosser theorem **

§4B. **Term models **

§5. **The graph model **

§6. **An effective version of the extensional model D∞**

**Appendix**

This chapter contains an elementary introduction to combinatory logic. The topic is highly developed, but the chapter has quite a limited aim: that of yielding all the necessary prerequisites and making the book self-contained. According to the informal ideas outlined in the general introduction, we aim at investigating an axiomatic notion of abstract logical system, whose ground structure (*the abstract syntax*) is a combinatory algebra, extended with suitable built-in operations and with a primitive notion *N* of natural number. The choice of *N* is largely a matter of convenience and tradition; the basic constructions do not depend on the initial stock of built-in predicates and operations.

The central aim of this chapter is to clarify what we understand by ground structure. We begin in the axiomatic style and we describe a formal system OP for a type-free theory of operations; we then outline three basic semantic constructions. We underline that the basic constructions can be carried out in OP itself.

After the description of the formal language (§1), we define OP and we discuss its general features (§2; closure under β-conversion, fixed point theorem, relation with λ-calculus), while §3 reviews some basic facts of recursion theory. We then present the term models of OP, which are based on the fundamental Church-Rosser theorem (§4). In §5 we give an elementary description of the Plotkin-Scott graph model Pω, together with its recursive submodel RE and Engeler’s construction. Finally, following an elegant procedure due to Scott (1976, 1980), we show how to isolate an extensional submodel D∞ of RE.

We describe an axiomatic theory of operations OP, which is a first-order extension of pure combinatory logic by simple number-theoretic notions. OP is proof-theoretically equivalent to *PA*, the elementary system of Peano arithmetic, and it will constitute the basis of all systems to be investigated in this book.

The basic language contains:

(i) countably many individual variables *x*1, *x*2, *x*3, …;

(ii) the logical constants ¬, ∧, ∀;

(iii) the individual constants *K* (constant function combinator), *S* (composition combinator), *SUC* (successor), *PRED* (predecessor), *PAIR* (ordered pair operation), *LEFT* (left projection), *RIGHT* (right projection), 0 (zero), *D* (definition by cases on numbers);

(iv) the binary function symbol *Ap *(application operation) and the predicate symbols *N *(natural numbers), *T *(truth), = (equality).

Terms are inductively defined from variables and constants via application of *Ap*. We use *x, y, z, u, v, w, f, g* as metavariables, while *t, t′, t″, s, s′, r, r′*, etc., are metavariables for terms. We write (*ts*) instead of *Ap*(*t, s*), and outer brackets are usually omitted, while the missing ones are restored by associating to the left; for instance, *xyz* stands for ((*xy*)*z*).

We adopt familiar shorthands for special terms: *t* + 1 := *SUCt* (= the successor of *t*); 〈*t, s*〉 := *PAIRts* (= the ordered pair composed of *t* and *s*); (*t*)1 := *LEFTt* (= the left projection of *t*) and (*t*)2 := *RIGHTt* (= the right projection of *t*).

Formulas are inductively generated by means of the logical operations from atomic formulas (atoms, in short) of the form *t *= *s, Nt, Tt*.

*A, B, C*, … are used as metavariables for formulas. As to the syntactical notions of free and bound variable, substitution, etc., we follow the standard conventions and terminology (Shoenfield 1967). In particular, if E is an expression (term or formula), E(x) means that x may occur free in E, while E[x := t] stands for the result of substituting t for the free occurrences of x (provided t is substitutable for x in E). FV(E) is the set of free variables of the expression E; x ∈ FV(E) means that "x occurs free in E".

The remaining logical symbols are defined classically: *A* ∨ *B* := ¬(¬*A* ∧ ¬*B*), *A* → *B* := ¬*A* ∨ *B*, *A* ↔ *B* := (*A* → *B*) ∧ (*B* → *A*), ∃*xA* := ¬∀*x*¬*A*.

We stick to the usual convention that ¬ and quantifiers bind more than the remaining connectives, while ∨, ∧ bind more than → and ↔; sometimes dots are used in place of parentheses (see §5 of the introduction).

As usual, a numeral is any term obtained from the constant zero by means of a finite number of successor applications; if *n* ∈ ω, the *n*th numeral is the term obtained from zero with *n* applications of *SUC*.
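The clause above can be mirrored mechanically on symbolic terms. In the following sketch, tuples model the application operation and the string names are ours, purely illustrative:

```python
# Numerals as symbolic terms: the n-th numeral is SUC applied n times
# to the constant zero (tuples model the application operation Ap).

def numeral(n):
    t = '0'
    for _ in range(n):
        t = ('SUC', t)    # one more application of the successor
    return t

print(numeral(3))   # ('SUC', ('SUC', ('SUC', '0')))
```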

We now recall the standard definition of λ-abstraction in combinatory logic.

If *t* is a term and *x* a variable, the term λ*x.t* is defined by induction on the construction of *t*:

(i) λ*x.x* := *SKK*;

(ii) λ*x.t* := *Kt*, if *x* ∉ FV(*t*);

(iii) λ*x.*(*ts*) := *S*(λ*x.t*)(λ*x.s*), if *x* ∈ FV(*ts*).

Of course λ*x.t* has exactly the same free variables as *t*, minus *x*. Coding of *n*-tuples can be obviously defined by iteration of the pairing operation.
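The abstraction clauses (i)–(iii) can be run on symbolic terms, together with the weak reduction rules for *K* and *S*, to check that (λ*x.t*)*a* reduces to *t*[*x* := *a*]. The sketch below is ours, not the book's: tuples model application, lowercase strings are variables, and the reducer handles just enough cases for the example.

```python
# Bracket abstraction (clauses (i)-(iii)) over symbolic combinatory terms,
# plus weak reduction with the rules  K a b -> a  and  S a b c -> a c (b c).
# Representation: a tuple (t, s) is the application (ts); lowercase strings
# are variables, uppercase strings ('S', 'K', ...) are constants.

def free_vars(t):
    if isinstance(t, tuple):                       # application (t s)
        return free_vars(t[0]) | free_vars(t[1])
    return {t} if t.islower() else set()

def lambda_(x, t):
    if t == x:                                     # (i)   lambda x.x  = SKK
        return (('S', 'K'), 'K')
    if x not in free_vars(t):                      # (ii)  lambda x.t  = Kt
        return ('K', t)
    s0, s1 = t                                     # (iii) lambda x.(ts)
    return (('S', lambda_(x, s0)), lambda_(x, s1))

def reduce_(t):
    """Reduce with  (K b) a -> b  and  ((S b) c) a -> (b a)(c a)."""
    while isinstance(t, tuple):
        f, a = reduce_(t[0]), t[1]
        if isinstance(f, tuple) and f[0] == 'K':
            t = f[1]
        elif (isinstance(f, tuple) and isinstance(f[0], tuple)
              and f[0][0] == 'S'):
            t = ((f[0][1], a), (f[1], a))
        else:
            return (f, reduce_(a))                 # head normal form
    return t

# beta at work: (lambda x. x y) applied to K reduces to (K y) = (x y)[x := K]
term = lambda_('x', ('x', 'y'))
print(reduce_((term, 'K')))   # ('K', 'y')
```

This is exactly the sense in which {*x*: *A*}*y* converts to [*A*[*x* := *y*]] in the text: abstraction followed by application recovers substitution.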
