
Model-theoretic semantics

Model-theoretic semantics is a formal account of the interpretations of legitimate expressions of a language. It is increasingly being used to provide Web markup languages with well-defined semantics. But its roles and limitations for the Semantic Web have not yet received a coherent and detailed treatment. This paper takes the first steps towards such a treatment. The major result is an introductory explication of key ideas that are usually only implicit in existing accounts of semantics for the Web, together with references to more detailed accounts of these ideas. The benefit of this explication is increased awareness among Web users of some important issues inherent in using model-theoretic semantics for Web markup languages.

Proof-Theoretic Semantics
First published Wed Dec 5, 2012

Proof-theoretic semantics is an alternative to truth-condition semantics. It is based on the fundamental assumption that the central notion in terms of which meanings are assigned to certain expressions of our language, in particular to logical constants, is that of proof rather than truth. In this sense proof-theoretic semantics is semantics in terms of proof. Proof-theoretic semantics also means the semantics of proofs, i.e., the semantics of entities which describe how we arrive at certain assertions given certain assumptions. Both aspects of proof-theoretic semantics can be intertwined, i.e., the semantics of proofs is itself often given in terms of proofs.

Proof-theoretic semantics has several roots, the most specific one being Gentzen's remark that the introduction rules in his calculus of natural deduction define the meanings of logical constants, while the elimination rules can be obtained as a consequence of this definition (see section 2.2.1). More broadly, it belongs to what Prawitz called general proof theory (see section 1.1). Even more broadly, it is part of the tradition according to which the meaning of a term should be explained by reference to the way it is used in our language.

Within philosophy, proof-theoretic semantics has mostly figured under the heading "theory of meaning". This terminology follows Dummett, who claimed that the theory of meaning is the basis of theoretical philosophy, a view which he attributed to Frege. The term "proof-theoretic semantics" was proposed by Schroeder-Heister (1991; used already in 1987 lectures in Stockholm) in order not to leave the term "semantics" to denotationalism alone; after all, "semantics" is the standard term for investigations dealing with the meaning of linguistic expressions. Furthermore, unlike "theory of meaning", the term "proof-theoretic semantics" covers philosophical and technical aspects alike. In 1999, the first conference with this title took place in Tübingen.
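Gentzen's remark can be illustrated concretely for conjunction. The following is a minimal sketch in Lean 4 (using only the core library); it is an illustration of the idea, not part of the original text:

```lean
-- Gentzen's idea, sketched for conjunction.
-- Introduction rule: a proof of A ∧ B is constructed from a proof of A
-- together with a proof of B. On the proof-theoretic view, this rule
-- *defines* the meaning of ∧.
example {A B : Prop} (a : A) (b : B) : A ∧ B := And.intro a b

-- Elimination rules: given that a proof of A ∧ B must have been built by
-- the introduction rule, projecting out each conjunct is justified as a
-- consequence of that definition.
example {A B : Prop} (h : A ∧ B) : A := h.left
example {A B : Prop} (h : A ∧ B) : B := h.right
```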

Truth Values
First published Tue Mar 30, 2010

Truth values have been put to quite different uses in philosophy and logic, being characterized, for example, as:

primitive abstract objects denoted by sentences in natural and formal languages; abstract entities hypostatized as the equivalence classes of sentences; what is aimed at in judgements; values indicating the degree of truth of sentences; entities that can be used to explain the vagueness of concepts; values that are preserved in valid inferences; values that convey information concerning a given proposition.

Depending on their particular use, truth values have been treated as unanalyzed, as defined, as unstructured, or as structured entities. The notion of a truth value was explicitly introduced into logic and philosophy by Gottlob Frege, for the first time in Frege 1891, and most notably in his seminal paper (Frege 1892). Frege conceived this notion as a natural component of his language analysis, where sentences, being saturated expressions, are interpreted as a special kind of names, which refer to (denote, designate, signify) a special kind of objects: truth values. Moreover, there are, according to Frege, only two such objects: the True (das Wahre) and the False (das Falsche): "A sentence proper is a proper name, and its Bedeutung, if it has one, is a truth-value: the True or the False" (Beaney 1997, 297).

This new and revolutionary idea has had a far-reaching and manifold impact on the development of modern logic. It provides the means to uniformly complete the formal apparatus of a functional analysis of language by generalizing the concept of a function and introducing a special kind of functions, namely propositional functions, or truth-value functions, whose range of values consists of the set of truth values. Among the most typical representatives of propositional functions one finds predicate expressions and logical connectives. As a result, one obtains a powerful tool for a conclusive implementation of the extensionality principle (also called the principle of compositionality), according to which the meaning of a complex expression is uniquely determined by the meanings of its components. On this basis one can also discriminate between extensional and intensional contexts and advance further to the conception of intensional logics. Moreover, the idea of truth values has induced a radical rethinking of some central issues in the philosophy of logic, including: the categorial status of truth, the theory of abstract objects, the subject-matter of logic and its ontological foundations, the concept of a logical system, the nature of logical notions, etc.
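The treatment of connectives as truth-value functions, and the compositionality principle, can be made concrete. Below is a minimal sketch in Python; the tuple encoding of sentences and the function names are illustrative choices, not part of the original text:

```python
# Connectives as truth-value functions: each maps truth values to a truth
# value. Following the extensionality (compositionality) principle, the value
# of a complex sentence is determined solely by the values of its parts.

NOT = {True: False, False: True}
AND = {(True, True): True, (True, False): False,
       (False, True): False, (False, False): False}

def value(sentence, valuation):
    """Compositionally compute the truth value of a formula.

    Atomic sentences are strings; complex ones are nested tuples of the
    form ('not', p) or ('and', p, q).
    """
    if isinstance(sentence, str):
        return valuation[sentence]          # atomic: look up its value
    op, *parts = sentence
    if op == 'not':
        return NOT[value(parts[0], valuation)]
    if op == 'and':
        return AND[(value(parts[0], valuation), value(parts[1], valuation))]
    raise ValueError(f"unknown connective: {op}")

v = {'p': True, 'q': False}
print(value(('and', 'p', ('not', 'q')), v))  # True
```

Because each connective is a function of truth values alone, swapping one subformula for another with the same value never changes the value of the whole: exactly the behaviour of an extensional context.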

Tarski's Truth Definitions


First published Sat Nov 10, 2001; substantive revision Mon Aug 16, 2010

In 1933 the Polish logician Alfred Tarski published a paper in which he discussed the criteria that a definition of "true sentence" should meet, and gave examples of several such definitions for particular formal languages. In 1956 he and his colleague Robert Vaught published a revision of one of the 1933 truth definitions, to serve as a truth definition for model-theoretic languages. This entry will simply review the definitions and make no attempt to explore the implications of Tarski's work for semantics (natural language or programming languages) or for the philosophical study of truth.

1.1 Object language and metalanguage

If the language under discussion (the object language) is L, then the definition should be given in another language known as the metalanguage, call it M. The metalanguage should contain a copy of the object language (so that anything one can say in L can be said in M too), and M should also be able to talk about the sentences of L and their syntax. Finally, Tarski allowed M to contain notions from set theory, and a 1-ary predicate symbol True with the intended reading "is a true sentence of L". The main purpose of the metalanguage was to formalise what was being said about the object language, and so Tarski also required that the metalanguage should carry with it a set of axioms expressing everything that one needs to assume for purposes of defining and justifying the truth definition. The truth definition itself was to be a definition of True in terms of the other expressions of the metalanguage. So the definition was to be in terms of syntax, set theory and the notions expressible in L, but not semantic notions like "denote" or "mean" (unless the object language happened to contain these notions). Tarski assumed, in the manner of his time, that the object language L and the metalanguage M would be languages of some kind of higher-order logic.
Today it is more usual to take some kind of informal set theory as one's metalanguage; this would affect a few details of Tarski's paper but not its main thrust. Also today it is usual to define syntax in set-theoretic terms, so that for example a string of letters becomes a sequence. In fact one must use a set-theoretic syntax if one wants to work with an object language that has uncountably many symbols, as model theorists have done freely for over half a century now.
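Tarski's strategy, defining a 1-ary predicate True by recursion on a set-theoretic syntax, can be sketched for a toy object language. In the sketch below, Python stands in for the (informal set-theoretic) metalanguage, and the object language L consists of numeral equations closed under negation and disjunction; the tuple representation is an illustrative assumption, not Tarski's own notation:

```python
# A Tarski-style truth definition. Python plays the metalanguage M; the
# object language L has atomic sentences ('eq', m, n) meaning "m = n",
# closed under ('neg', p) and ('or', p, q). Sentences are nested tuples,
# i.e., a set-theoretic syntax.

def is_true(s):
    """The 1-ary predicate True for L, defined by recursion on syntax.

    Each clause uses only syntax plus metalanguage notions; no semantic
    primitives like 'denotes' or 'means' appear on the right-hand side.
    """
    op = s[0]
    if op == 'eq':                       # atomic: "m = n"
        return s[1] == s[2]
    if op == 'neg':                      # "not p"
        return not is_true(s[1])
    if op == 'or':                       # "p or q"
        return is_true(s[1]) or is_true(s[2])
    raise ValueError(f"not a sentence of L: {s}")

# Material adequacy holds clause by clause; e.g. ('eq', 0, 0) is true
# if and only if 0 = 0:
print(is_true(('eq', 0, 0)))            # True
print(is_true(('neg', ('eq', 0, 1))))   # True
```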

1.2 Formal correctness

The definition of True should be formally correct. This means that it should be a sentence of the form "For all x, True(x) if and only if φ(x)", where True never occurs in φ; or, failing this, that the definition should be provably equivalent to a sentence of this form. The equivalence must be provable using axioms of the metalanguage that don't contain True. Definitions of the kind displayed above are usually called explicit, though Tarski in 1933 called them normal.

Dag Prawitz (born 1936, Stockholm) is a Swedish philosopher and logician. He is best known for his work on proof theory and the foundations of natural deduction. Prawitz is a member of the Norwegian Academy of Science and Letters, the Royal Swedish Academy of Letters, History and Antiquities, and the Royal Swedish Academy of Sciences.

First published Wed Apr 16, 2008

The development of proof theory can be naturally divided into: the prehistory of the notion of proof in ancient logic and mathematics; the discovery by Frege that mathematical proofs, and not only the propositions of mathematics, can (and should) be represented in a logical system; Hilbert's old axiomatic proof theory; the failure of Hilbert's aims through Gödel's incompleteness theorems; Gentzen's creation of the two main types of logical system of contemporary proof theory, natural deduction and sequent calculus (see the entry on automated reasoning); and applications and extensions of natural deduction and sequent calculus, up to the computational interpretation of natural deduction and its connections with computer science.

1. Prehistory of the notion of proof


Proof theory can be described as the study of the general structure of mathematical proofs, and of arguments with demonstrative force as encountered in logic. The idea of such demonstrative arguments, i.e., ones whose conclusion follows necessarily from the assumptions made, is central in Aristotle's Analytica Posteriora: a deductive science is organised around a number of basic concepts that are assumed understood without further explanation, and a number of basic truths or axioms that are seen as true immediately. Defined concepts and theorems are reduced to these two, the latter through proof. Aristotle's account of proof as demonstrative argument fits very well with the structure of ancient geometry as axiomatized in Euclid. The specific form of Aristotle's logic, the theory of syllogism, has instead, so it seems, almost nothing to do with proofs in Euclidean geometry. These proofs remained intuitive for more than two thousand years. Before the work of Frege in 1879, no one seems to have maintained that there could be a complete set of principles of proof, in the sense expressed by Frege when he wrote that in his symbolic language, "all that is necessary for a correct inference is expressed in full, but what is not necessary is generally not indicated; nothing is left to guesswork." (One might contend that Boole is an exception as far as classical propositional logic is concerned.) Even after Frege, logicians such as Peano kept formalizing the language of mathematical arguments, but without any explicit list of rules of proof. Frege's step ahead was decisive for the development of logic and foundational study.

The contrast with the ancients is great: Aristotle gave a pattern for combining arguments, but the idea of a finite closed set of rules was, philosophically, beyond the dreams of anyone before Frege, with the possible exception of Leibniz. As we know today, Frege's principles of proof are complete for classical predicate logic. Russell took up his logic, but used the notation of Peano, and thus formulated an axiomatic approach to logic. The idea was that the axioms express basic logical truths, and other logical truths are derived from these through modus ponens and universal generalization, the two principles Frege had identified. Mathematics was to be reduced to logic, so that its proofs would be presented in the same axiomatic pattern. Frege's and Russell's approach to logic became the universally accepted one, especially through the influence of Hilbert and his co-workers in the 1920s.

In the 19th century, Frege was a marginal figure, and the algebraic approach to logic, as in Boole and especially Ernst Schröder, was the dominant one. It is clear that there was a good understanding of the principles of predicate logic in this tradition, for how could there have been a Löwenheim-Skolem theorem otherwise? Skolem found out about Frege's logic through Principia Mathematica only after having worked out the theorem in his paper of 1920. The first section of that paper, widely read because of the English translation in Van Heijenoort's From Frege to Gödel, marks the end of the algebraic tradition of logic, which merged silently with lattice theory. Other sections of the paper contain a remarkable beginning of the analysis of proofs in lattice theory and in projective geometry: Skolem considered the axioms of a mathematical theory from a purely combinatorial and formal point of view, as means for producing derivations of a formula from given formulas used as assumptions.
It was found out in the early 1990s that the part on lattice theory contains the solution of a decision problem, called the word problem for freely generated lattices, whose previously known solution stemmed from 1988! Skolem's terminology and notation in lattice theory are those of Schröder, and that is part of the reason why his work was a lost opportunity for proof theory.

Jamie Torrevillas, BSCA-III
