CONSTRUCTIVE REALISM
SYNTHESE LIBRARY
STUDIES IN EPISTEMOLOGY,
LOGIC, METHODOLOGY, AND PHILOSOPHY OF SCIENCE
Managing Editor:
VOLUME 287
FROM INSTRUMENTALISM
TO CONSTRUCTIVE REALISM
On Some Relations between Confirmation,
Empirical Progress, and Truth Approximation
by
THEO A.F. KUIPERS
University of Groningen, The Netherlands
A catalogue record for this book is available from the Library of Congress.
ISBN 978-90-481-5369-5
ISBN 978-94-017-1618-5 (eBook)
DOI 10.1007/978-94-017-1618-5
TABLE OF CONTENTS

FOREWORD

CHAPTER 1: GENERAL INTRODUCTION: EPISTEMOLOGICAL POSITIONS

PART I: CONFIRMATION

INTRODUCTION TO PART I

CHAPTER 2: CONFIRMATION BY THE HD-METHOD
2.1.
2.2.
2.3.

PART II: EMPIRICAL PROGRESS

INTRODUCTION TO PART II

CHAPTER 5: SEPARATE EVALUATION OF THEORIES BY THE HD-METHOD
5.1. HD-evaluation of a theory
5.2. Falsifying general hypotheses, statistical test implications, and complicating factors

CHAPTER 6: EMPIRICAL PROGRESS AND PSEUDOSCIENCE
6.1. Comparative HD-evaluation of theories
6.2. Evaluation and falsification in the light of truth approximation
6.3. Scientific and pseudoscientific dogmatism

PART III: BASIC TRUTH APPROXIMATION

CHAPTER 7:
7.1.
7.2.
7.3.
7.4.
7.5.

CHAPTER 8:
8.1.
8.2.
8.3.

PART IV: REFINED TRUTH APPROXIMATION

INTRODUCTION TO PART IV

CHAPTER 10: REFINEMENT OF NOMIC TRUTH APPROXIMATION
10.1. Structurelikeness
10.2. Refined nomic truthlikeness and truth approximation
10.3. Foundations of refined nomic truth approximation
10.4. Application: idealization & concretization
10.5. Stratified refined nomic truth approximation

NOTES

REFERENCES

INDEX OF NAMES

INDEX OF SUBJECTS
FOREWORD
Over the years, I have been working on two prima facie rather different, if not
opposing, research programs, notably Carnap's confirmation theory and
Popper's truth approximation theory. However, I have always felt that they
were compatible, even smoothly synthesizable, for all empirical scientists use
confirmation intuitions, and many of them have truth approximation ideas.
Gradually it occurred to me that the glue between confirmation and truth
approximation was the instrumentalist or evaluation methodology, rather than
the falsificationist one. By separate and comparative evaluation of theories in
terms of their successes and problems, hence even if already falsified, the
evaluation methodology provides in theory and practice the straight route for
short-term empirical progress in science in the spirit of Laudan. Further analysis
showed that this also sheds new light on the long-term dynamics of science
and hence on the relation between the main epistemological positions, viz.,
instrumentalism, constructive empiricism, referential realism, and theory realism of a non-essentialist nature, here called constructive realism. Indeed, thanks
to the evaluation methodology, there are good, if not strong, reasons for all
three epistemological transitions "from instrumentalism to constructive
realism".
To be sure, the title of the book is ambiguous. In fact it covers (at least)
three interpretations. Firstly, the book gives an explication of the mentioned
and the intermediate epistemological positions. Secondly, it argues that the
successive transitions are plausible. Thirdly, it argues that this is largely due
to the instrumentalist rather than the falsificationist methodology. However,
to clearly distinguish between the instrumentalist methodology and the instrumentalist epistemological position, the former is here preferably called the
evaluation methodology. In the book there arises a clear picture of scientific
development, with a short-term and a long-term dynamics. In the former there
is a severely restricted role for confirmation and falsification; the dominant role
is played by (the aim of) empirical progress, and there are serious prospects
for observational, referential and theoretical truth approximation. Moreover,
the long-term dynamics is enabled by (observational, referential and theoretical)
inductive jumps, after 'sufficient confirmation', providing the means to enlarge
the observational vocabulary in order to investigate new domains of reality.
This book presents the synthesis of many pieces that were published in various
journals and books. Material of the following earlier publications or publications to appear has been used, with the kind permission of the publishers:
Jeffrey, Otto Kardaun, Erik Krabbe, David Miller, Ivo Molenaar, Hans Mooij,
Thomas Mormann, Ulises Moulines, Wim Nieuwpoort, Ilkka Niiniluoto,
Leszek Nowak, Graham Oddie, David Pearce, Karl Popper, Hans Radder,
Hans Rott, Willem Schaafsma, Gerhard Schurz, Abner Shimony, Brian Skyrms,
Wolfgang Spohn, Peter Urbach, Nicolien Wierenga, Andrzej Wisniewski, Sandy
Zabell, Gerhard Zoubek, and Jan Zytkow.
The opportunity to write this book was provided by three important factors.
First, I got a sabbatical year (96/97) from my home university, the University
of Groningen, the Netherlands. Second, I was, after fourteen years, again invited
as a fellow for a year at the Dutch work-paradise for scholars, the Netherlands
Institute for Advanced Study of the Royal Dutch Academy of Sciences (NIAS,
in Wassenaar, near Leiden). The support of the staff was in various respects
very pleasant and efficient. I am especially grateful to Anne Simpson and Jane
Colling for editing my English. Third, and finally, three colleagues and friends
were willing to read and comment upon the entire manuscript: Roberto Festa,
Ilkka Niiniluoto, and Andrzej Wisniewski; some others were willing to comment
upon one or more chapters: Kenneth Gemes and Hans Radder. Of course, the
responsibility for any shortcomings is mine.
Theo A.F. Kuipers,
September 1999,
Groningen, The Netherlands
GENERAL INTRODUCTION:
EPISTEMOLOGICAL POSITIONS
Introduction
We will start by sketching in a systematic order the most important epistemological positions in the instrumentalism–realism debate, viz., instrumentalism,
constructive empiricism, referential realism and theory realism. They will be
ordered according to their answers to a number of leading questions, where
every next question presupposes an affirmative answer to the foregoing one.
The survey is restricted to the investigation of the natural world and hence to
the natural sciences. It should be stressed that several complications arise if
one wants to take the social and cultural world into account. However, the
present survey may well function as a point of departure for discussing epistemological positions in the social sciences and the humanities.
We will not only include the different answers to the standard questions
concerning true and false claims about the actual world, but also the most
plausible answers to questions concerning claims about the nomic world, that
is, about what is possible in the natural world. Moreover, we will include
answers to questions about 'actual' and 'nomic' truth approximation. At the
end we will give an indication of the implications of the results of the present
study of confirmation, empirical progress and truth approximation for the way
the epistemological positions are related. The main conclusions are the
following: there are good reasons for the instrumentalist to become a constructive empiricist; in his turn, in order to give deeper explanations of success
differences, the constructive empiricist is forced to become a referential realist;
in his turn, there are good reasons for the referential realist to become a theory
realist of a nonessentialist nature, here called a constructive realist.
1.1. FOUR PERSPECTIVES ON THEORIES
Focus                  Actual                   Nomic

Truth-value            Standard/traditional     Giere
Truth approximation    Peirce/Niiniluoto        Popper
Taking the variety of (versions of) positions seriously implies that the notions
of 'true' and 'false' are assumed to have adapted specifications, that is, 'true'
and 'false' may refer to the actual or the nomic world and may concern
observationally, referentially, or theoretically true and false, respectively. The
same holds for the notion of 'the truth', but it should be stressed in advance
that it will always be specified in a domain-and-vocabulary relative way. Hence,
no language-independent metaphysical or essentialist notion of 'THE TRUTH'
is assumed. With this in mind, let us start with the successive questions.
The first question really is not an epistemological question, but a preliminary
ontological question.
Question 0: Does a natural world that is independent of human
beings exist?
proper theories into observation theories, such as the atomic theory of matter
became for nuclear physics, has then to be conceived as merely a way of
speaking, giving rise to other kinds of as-if behavior. A positive answer to the
present question amounts to so-called scientific realism, according to which
proper theories, or at least theoretical terms, have to be taken seriously. A
negative answer might be said to reflect observational realism or just empiricism.
As a matter of fact, there are two well-known types of the negative answer
to Q2. According to the first type, usually called instrumentalism, talking about
reference of theoretical terms does not make sense, let alone talking about true
or false (proper) theories. This way of talking reflects, according to the instrumentalist, a kind of category mistake: meaningful terminology for the observational level is mistakenly extrapolated to the theoretical level. The only function of proper theories is to provide good derivation instruments, that is, they
need to enable the derivation of as many true observational consequences as
possible and as few false observational consequences as possible. Hence, the
ultimate aim of the instrumentalist is the best derivation instrument, if any.
Well-known representatives of the instrumentalist position among philosophers
are Schlick (1938) and Toulmin (1953). Although Laudan (1977, 1981) admits
that theories have truthvalues, he is frequently called an instrumentalist
because of his methodology, according to which theories are not disqualified
by falsification as long as they cannot be replaced by better ones. Moreover,
although debatable, the physicist Bohr is a reputed instrumentalist, at least as
far as quantum mechanics is concerned. Notice that it is plausible to make the
distinction between an actual and a nomic version of instrumentalism depending on whether the relevant true and false observational consequences all
pertain to the actual world or at least some to the nomic world.
According to the second type of negative answer to Q2, called (constructive)
empiricism by its inventor and main proponent Van Fraassen (1980, 1989),
there is no category mistake, that is, the point is not whether or not theoretical
terms can refer and whether proper theories can be true or false. In fact, such
terms may or may not refer and such theories are true or false, but the problem
is that we will never know this beyond reasonable doubt. Hence, what counts is
whether such theories are empirically adequate or inadequate or, to use our
favorite terminology, whether they are observationally true or false. Again
there are two versions, actual and nomic empiricism, depending on whether
the theories are all supposed to deal with the actual world or at least some
with the nomic world. Although Van Fraassen is clear about his non-nomic
intentions, the nomic analogue has some plausibility of its own. That is, it
makes perfect sense to leave room for observational dispositions, without taking
theoretical terms of other kinds seriously. In other words, if one conceives
dispositions in general, hence including observational dispositions, as theoretical terms, one may well reserve a special status for observational dispositions.
In both versions, it makes sense to talk about the observational truth in the
sense of the strongest true observational hypothesis about a certain domain of
the natural world within a certain vocabulary. Assuming that it is also possible
to make sense of the idea that (the observational theory following from) one
theory is closer to the observational truth than another, even convergence to
the observational truth is possible. As suggested, Van Fraassen is a strong
defender of empiricism in the traditional, that is, actualist and truth-value
oriented, perspective, where it should be remarked that his attitude is strongly
influenced by his interest in quantum mechanics. Although Van Fraassen
extrapolates this attitude to other proper theories, there are also scientists, in
fact there are many, who take advantage of the fact that there may be examples
of proper theories towards which an empiricist attitude is the best defensible
one, whereas there are other examples towards which a realist attitude is the
best defensible one. This gives rise to what Dorling (1992) has aptly called
local positivist versus realist disputes, as opposed to the global dispute about
whether it is a matter of yes or no for all proper theories at the same time. In
this respect the empiricist attitude is usually identified with a position in the
global dispute; the realist positions that follow usually leave room for local
empiricist deviations from the globally realist attitude, as a kind of default
heuristic rule.
As remarked already, for both types of empiricists, the longterm dynamics
in science, according to which proper theories transform into observation
theories, has to be seen as an as-if way of speaking. The question even arises
whether this is really a coherent way of deviating from scientific practice, where
it seems totally accepted that the concept of observation is stretched to the
suggested theory-laden interpretation.
Hence, it is time to turn to the positive answer to Q2, that is, to the position
called scientific realism. Since the books by Hacking (1983) and Cartwright
(1983) there is a weaker version of realism than the traditional one, which is
suggested by the next question.
world. In a sense this is just a definition of (existence in) the nomic world, as
encompassing the actual world. Moreover, in both versions it is possible that
one theory is observationally and referentially closer to the truth than another,
as soon as we assume, in addition to the previous assumptions for observational
truth approximation, that it is possible to define the idea that (the total
referential claim of) one theory can be closer to the referential truth than
another. Here, the referential truth is of course the strongest true referential
claim which can be made within a certain vocabulary about a certain domain.
However, since referentialists do not want to take theoretical induction seriously,
that is, deciding to further assume that a certain proper theory is true (see
further below), the transformation of proper theories into observation theories
is for them no more open than for empiricists, i.e., it is open only in some as-if reading. Referential realism seems, however, more difficult to defend than
constructive empiricism, in particular when one takes the possibility of truth
approximation into account. That is, as long as one is only willing to think in
terms of true and false claims about theoretical terms when they are supposed
to refer, one may be inclined to hold that most of these claims, past and future
ones, are false. However, as soon as one conceives of sequences of such claims
that may approach the truth, it is hardly understandable that the truth would
not be a worthwhile target, at least in principle. Hence, let us turn to the
suggested stronger position.
The positive answer to Q3 brings us to so-called theoretical or theory realism,
in some version or another advocated by, for instance, Peirce (1934), Popper
(1963), and Niiniluoto (1987a).3 Theory realism shares with referential realism
the claim that theoretical terms are supposed to refer, and that, from time to
time, we have good reasons to assume that they refer, including the corresponding truth approximation claims. It adds to this the claim that theories are
claimed to be true, and that we have from time to time good reasons to further
assume that they are true, that is, to carry out a theoretical induction. Moreover,
proper theories can converge to the theoretical truth, that is, the strongest true
claim that can be made, in a given vocabulary, about a specific domain, again
leaving room for an actual and a nomic version. Although the truth to be
approached is again domain-and-vocabulary relative, this does not exclude, of
course, the possibility of comparison and translation of theories. Moreover,
theoretical induction is always a matter for the time being, a kind of temporal
default rule: as long as there is no counterevidence, it is assumed to be true.
This defaultassumption not only implies that the theoretical terms of the
theory then are assumed to refer, but also that the proper theory can from
then on be used as an observation theory. Hence, the transformation process
and the corresponding longterm dynamics are possible.
The last question to be considered is the following:
Question 4: Does there exist a correct or ideal conceptualization of
the natural world?
[Figure: the tree of epistemological positions generated by the successive questions. As far as it can be recovered: a negative answer to Question 0 yields ontological idealism, a positive one ontological realism; a negative answer to the next question yields epistemological relativism (experiential and inductive skepticism), a positive one epistemological realism; a negative answer to Question 2 yields empiricism (observational realism), i.e., instrumentalism or constructive empiricism, a positive one scientific realism; a negative answer to Question 3 yields referential realism (entity realism), a positive one theory realism; a negative answer to Question 4 yields constructive realism, a positive one essentialistic realism.]
progress and truth approximation in the rest of this book. The main results
will become available at the end of Chapters 2, 6, and 9, and finally in Chapter 13. A
brief indication of the results is the following.
There are good reasons for the instrumentalist to become a constructive
empiricist; in his turn, in order to give deeper explanations of success differences,
the constructive empiricist is forced to become a referential realist; in his turn,
there are good reasons for the referential realist to become a theory realist.
The theory realist has good reasons to choose constructive realism, since
there is no reason to assume that there are essences in the world. Notice that
the road to constructive realism amounts to a pragmatic argumentation for
this position, where the good reasons will mainly deal with the short-term and
the long-term dynamics generated by the nature of, and the relations between,
confirmation, empirical progress and truth approximation.
Besides these epistemological conclusions, there are some general methodological lessons to be drawn. There will appear to be good reasons for all positions
not to use the falsificationist but the instrumentalist or 'evaluation(ist)' methodology. That is, the selection of theories should exclusively be guided by more
empirical success, even if the better theory has already been falsified. Hence, the
methodological role of falsifications will be strongly relativized. This does not at
all imply that we dispute Popper's claim that aiming at falsifiable theories is
characteristic of empirical science; on the contrary, only falsifiable theories can
obtain empirical success. Moreover, instead of denouncing the hypothetico-deductive method, the evaluation methodology amounts to a sophisticated application
of that method. As suggested, the evaluation methodology may also be called the
instrumentalist methodology, because the suggested methodology is usually associated with the instrumentalist epistemological position. The reason is, of course,
that from that position it is quite natural not to consider a theory as seriously
disqualified by mere falsification. However, since we will argue that that methodology is also very useful for the other positions, we want to terminologically separate
the instrumentalist methodology from the instrumentalist epistemological position, by calling the former the evaluation methodology, enabling us to identify 'instrumentalism' with the latter.
We close this section with a warning. The suggested hierarchy of the heuristics
corresponding to the epistemological positions is, of course, not to be taken in
a dogmatic sense. That is, when one is unable to successfully use the constructive
realist heuristic, one should not stick to it, but try weaker heuristics, hence first
the referential realist, then the empiricist, and finally the instrumentalist heuristic. For, as with other kinds of heuristics, although not everything always goes,
pace (the suggestion of) Feyerabend, everything goes sometimes. Moreover,
after using a weaker heuristic, a stronger heuristic may become applicable at
a later stage: "reculer pour mieux sauter".
and falsification of hypotheses, together with disconfirmation and verification (the 'deductive confirmation matrix' and the 'confirmation
square', respectively).
 Systems of inductive probability, representing inductive confirmation,
among which optimum systems, systems with analogy and systems with
non-zero probability for universal statements.
In Part II, entitled "Empirical progress":
 'The evaluation report' of a theory in terms of general successes and
(individual) counterexamples, and a systematic survey of the factors
complicating theory testing and evaluation.
The nature of comparative theory evaluation, giving rise to the 'rule of
success', characterizing 'empirical progress' and prescribing the selection of the
theory, if any, that has so far proven to be the most successful one.
 The symmetric 'evaluation matrix' for the comparison of the relative
success of two theories.
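The rule of success admits a simple set-theoretic reading. The following is a minimal, purely illustrative sketch; the representation of an evaluation report as two sets and the helper name are assumptions for illustration, not the book's formal apparatus. The idea: a theory is at least as successful as another when it retains all of the other's general successes while incurring no counterexamples beyond the other's.

```python
def at_least_as_successful(x, y):
    """x is at least as successful as y: x keeps all of y's general
    successes and has no counterexamples that y lacks."""
    return (y["successes"] <= x["successes"]
            and x["counterexamples"] <= y["counterexamples"])

# Invented evaluation reports for two theories X and Y
X = {"successes": {"s1", "s2", "s3"}, "counterexamples": {"c1"}}
Y = {"successes": {"s1", "s2"}, "counterexamples": {"c1", "c2"}}

print(at_least_as_successful(X, Y))  # True: the rule of success selects X
print(at_least_as_successful(Y, X))  # False
```

Note that the relation so defined is only a partial order: two theories may each have successes and counterexamples the other lacks, in which case neither dominates.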
In Part III, entitled "Basic truth approximation":
The (domain-and-vocabulary relative) definitions of 'the actual truth'
and 'the nomic truth', the basic definitions of 'closer to the nomic truth'
or 'nomically more truthlike', and the success theorem, according to which
'more truthlikeness' guarantees 'more successfulness', with the consequence that scientists following the rule of success behave functionally
for truth approximation, whether they like it or not.
 The detailed reconstructions of intuitions of scientists and philosophers
about truth approximation, the correspondence theory of truth, and
dialectical patterns.
 The epistemological stratification of these results, including definitions
of reference, the referential truth, and referential truthlikeness, enabling
the detailed evaluation of the three transitions from instrumentalism to
constructive realism.
In Part IV, entitled "Refined truth approximation":
The basic definitions do not take into account that theories are frequently improved by replacing mistaken models by other mistaken ones.
Roughly the same patterns as in Part III arise from refined definitions
that take this phenomenon into account.
Specifically, idealization and concretization become in this way functional for (potential) truth approximation, which will be illustrated by
three examples, two illustrating the specific pattern of 'double concretization', viz., the law of Van der Waals and capital structure theory, and
one illustrating the pattern of 'specification followed by concretization',
viz., the old quantum theory.
The book concludes with a sketch of the resulting favorite epistemological
position, viz., constructive realism, which may be seen as a background pattern
of thinking, guiding many, if not most, scientists when practicing the instrumentalist methodology.
This brings us to the use-value of cognitive structures. They always concern
informative patterns, which seem useful in one way or another. We briefly
indicate five kinds of possible use-value, with some particular references to
structures disentangled in this book:
(a) they may provide the 'null hypothesis of ideal courses of events', which
can play a guiding role in social studies of science; e.g., the cognitive
justification of non-falsificationist behavior, following from the nature
of theory evaluation (Section 6.3.).
(b) they may clarify or even solve classical problems belonging to abstract
philosophy of science; e.g., the intra-level explication of the correspondence theory of truth (Section 8.2.).
(c) they may be useful as didactic instruments for writing advanced textbooks, leading to better understanding and remembrance; e.g., the
comparative evaluation matrix (Section 6.1.).
(d) they may play a heuristic role in research policy and even in science
policy; e.g., confirmation analysis suggests redirection of drug testing
(Section 2.2.).
(e) last but not least, they may play a heuristic role in actual research; e.g.,
when explanatorily more successful, the theory realist may have good
reasons to hypothesize theoretical truth approximation despite some
extra observational problems (Chapter 9).5
A final remark about the technical means that will be used. Besides some
elementary logic and set theory, some of the main ideas of the structuralist
approach to theories will be used. Where relevant, starting in Part III, what is
strictly necessary will be explained. However, for detailed expositions the reader
is referred to (Balzer 1982) and (Balzer, Moulines and Sneed 1987). For a
general survey the reader may consult (Kuipers 1994), also to appear in
(Kuipers SiS).
PART I
CONFIRMATION
INTRODUCTION TO PART I
Deductive and non-deductive confirmation may or may not have extrapolating or inductive features. The last chapter of this part (Chapter 4) deals with
the program of inductive confirmation or inductive logic, set up by Carnap
and significantly extended by Hintikka. It is a special explication program
within the Bayesian program of confirmation theory. It is governed by the idea
of extrapolating or learning from experience in a rational way, to be expressed
in terms of 'inductive probabilities' as opposed to 'non-inductive probabilities'.
In later chapters the role of falsification and confirmation will be relativized
in many respects. However, it will also become clear that they remain very
important for particular types of hypotheses, notably, for general observational
(conditional) hypotheses, and for several kinds of (testable) comparative
hypotheses, e.g., hypotheses claiming that one theory is more successful or
more (observationally, referentially and theoretically) truthlike than another.
A recurring theme in this part, and the other ones, will be the localization
and comparison of the main standard epistemological positions, as they have
been presented in Chapter 1, viz., instrumentalism, constructive empiricism,
referential realism, and constructive (theory) realism.
2
CONFIRMATION BY THE HD-METHOD
Introduction
According to the leading expositions of the hypothetico-deductive
(HD-)method by Hempel (1966), Popper (1934/1959) and De Groot
(1961/1969), the aim of the HD-method is to answer the question whether a
hypothesis is true or false, that is, it is a method of testing. On closer inspection,
this formulation of the aim of the HD-method is not only laden with the
epistemological assumption of theory realism according to which it generally
makes sense to aim at true hypotheses, but it also mentions only one of the
realist aims. One other aim of the HD-method for the realist, an essentially
more refined aim, is to answer the question of which facts are explained by the
hypothesis and which facts are in conflict with it. In the line of this second
aim, we will show in Part II that the HD-method can also be used to evaluate
theories, separately and in comparison with other theories, among others, in
terms of their general successes and individual problems, where theories are
conceived as hypotheses of a strong kind. In Parts III and IV we will argue
that the methodology of (comparative) HD-evaluation of theories is even
functional for truth approximation, the ultimate aim of the realist.
For the moment we will restrict attention to the HD-method as a method
of testing hypotheses. Though the realist has a clear aim with HD-testing, this
does not mean that HD-testing is only useful from that epistemological point
of view. Let us briefly review in the present connection the main other epistemological positions that were described in Chapter 1. Hypotheses may or may
not use so-called 'theoretical terms', besides so-called 'observation terms'. What
is observational is not taken in some absolute, theoryfree sense, but depends
greatly on the level of theoretical sophistication. Theoretical terms intended to
refer to something in the nomic world may or may not in fact refer. For the
(constructive) empiricist the aim of HD-testing is to find out whether the
hypothesis is observationally true, i.e., has only true observational consequences, or is observationally or empirically adequate, to use Van Fraassen's
favorite expression. For the instrumentalist the aim of HD-testing is still more
liberal: is the hypothesis observationally true for all intended applications? The
referential realist, on the other hand, adds to the aim of the empiricist the aim
of finding out whether the hypothesis is referentially true, i.e., whether its referential
claims are correct. In contrast to the theory realist, he is not interested in the
question whether the theoretical claims, i.e., the claims using theoretical terms,
are true as well. Recall that claims may pertain to the actual world or to the
nomic world (of physical possibilities). However, this distinction will not play
an important role in the first part of this book.
Methodologies are ways of answering epistemological questions. It will turn
out that the method of HD-testing, the test methodology, is functional for
answering the truth question of all four epistemological positions. For this
reason, we will present the test methodology in fairly neutral terms, viz.,
plausibility, confirmation and falsification.
The expression 'the plausibility of a hypothesis' abbreviates the informal
qualification 'the plausibility, in the light of the background beliefs and the
evidence, that the hypothesis is true', where 'true' may be specified in one of
the four main senses: (1) observationally as far as the intended applications
are concerned, (2) observationally, in all possible respects, (3) and, moreover,
referentially, (4) and, even, theoretically. Admittedly, despite these possible
qualifications, the notion of 'plausibility' remains necessarily vague, but that is
what most scientists would be willing to subscribe to.1 At the end of this
chapter we will further qualify the exposition for the four epistemological
positions when discussing the acceptance of hypotheses. When talking about
'the plausibility of certain evidence', we mean, of course, 'the (prior) plausibility
of the (observational) hypothesis that the test will result in the reported outcome'. Hence, here 'observationally true' and 'true' coincide by definition of
what can be considered as evidential statements.
Regarding the notions of 'confirmation' and 'falsification' the situation is
rather asymmetric. 'Falsification' of a hypothesis simply means that the evidence
entails that the hypothesis is observationally false, and hence also false in the
stronger senses. However, what 'confirmation' of a hypothesis precisely means,
is not so clear. The explication of the notion of 'confirmation' of a hypothesis
by certain evidence in terms of plausibility will be the main target of this and
the following chapter. It will be approached from the success perspective on
confirmation, equating confirmation with an increase of the plausibility of the
evidence on the basis of the hypothesis, and implying that the plausibility of
the hypothesis is increased by the evidence.
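In probabilistic (Bayesian) terms, this success perspective rests on a symmetry of Bayes' theorem: if the hypothesis makes the evidence more plausible (P(E|H) > P(E)), then the evidence makes the hypothesis more plausible (P(H|E) > P(H)). A minimal numerical sketch, with invented illustrative probabilities:

```python
def posterior(prior, likelihood, evidence_prob):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

p_h = 0.3                                  # prior P(H), invented
p_e_given_h = 0.8                          # P(E|H), invented
p_e = p_e_given_h * p_h + 0.2 * (1 - p_h)  # total probability: P(E) = 0.38

p_h_given_e = posterior(p_h, p_e_given_h, p_e)
print(p_e_given_h > p_e)   # True: H raises the plausibility of E
print(p_h_given_e > p_h)   # True: hence E raises the plausibility of H
```

The two comparisons stand or fall together, since P(H|E)/P(H) = P(E|H)/P(E) whenever the probabilities are non-zero.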
The variety of empirical hypotheses is large. To stress this we mention a
number of examples:
- Mozart was poisoned by Salieri,
- Dutch is more similar to English than to German,
- When people have bought something they have selective attention for information justifying their choice,
- People tend to choose that action which maximizes their expected utility,
- The function of lungs is to supply oxygen to the organism,
- The average rainfall per year gradually increased in the 20th century,
- The universe originated from the big bang,
Scheme 2.1.1:
  H ⊨ I
  H
  ─────
  I

Scheme 2.1.2:
  H ⊨ (C → F)
  H, C
  ─────
  F
implication is false, the hypothesis must be false, it has been falsified, for the following arguments are deductively valid:

Scheme 2.2.1:
  H ⊨ I
  ¬I
  ─────
  ¬H

Scheme 2.2.2:
  H ⊨ (C → F)
  C, ¬F
  ─────
  ¬H
When the test implication turns out to be true, the hypothesis has of course not been (conclusively) verified, for the following arguments are invalid, indicated by '//':

Scheme 2.3.1:
  H ⊨ I
  I
  //
  H?

Scheme 2.3.2:
  H ⊨ (C → F)
  C, F
  //
  H?
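The validity contrast between the falsification schemes (2.2.1, 2.2.2) and the invalid verification schemes (2.3.1, 2.3.2) can be checked mechanically at the propositional level. The following sketch (a deliberate simplification: it reads 'H ⊨ I' as the material conditional, which suffices for the validity pattern) enumerates all truth-value assignments:

```python
from itertools import product

def valid(premises, conclusion):
    # an argument is valid iff every assignment satisfying all premises
    # also satisfies the conclusion
    return all(conclusion(h, i)
               for h, i in product([True, False], repeat=2)
               if all(p(h, i) for p in premises))

implies = lambda a, b: (not a) or b

# Scheme 2.2.1 (modus tollens): H -> I, not-I, therefore not-H -- valid
assert valid([lambda h, i: implies(h, i), lambda h, i: not i],
             lambda h, i: not h)

# Scheme 2.3.1 (affirming the consequent): H -> I, I, therefore H -- invalid
assert not valid([lambda h, i: implies(h, i), lambda h, i: i],
                 lambda h, i: h)
```

The conditional schemes 2.2.2 and 2.3.2 behave in exactly the same way once C and F are added as further propositional variables.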
qualitative ideas about confirmation (or corroboration, to use his favorite term)
are basically in agreement with the Bayesian approach.
However, there is at least one influential author about confirmation, viz.,
Glymour, whose ideas do not seem to be compatible with the qualitative theory
to be presented. In Note 11 to Subsection 2.2.3. we will briefly discuss Glymour's project, and question it as a project of explication of the concept of confirmation, rather than one of theoretical measurement, which it surely is.
Guided by the success perspective on confirmation, Section 2.1. gives an
encompassing qualitative theory of deductive confirmation by adding a comparative supplement to the classificatory basis of deductive confirmation. The
comparative supplement consists of two principles. In Section 2.2. several
classical problems of confirmation are dealt with. First, it is shown that the
theory has a plausible solution of Hempel's raven paradoxes, roughly, but not
in detail, in agreement with an influential Bayesian account. Second, it is shown
that the theory has a plausible, and instructive, solution of Goodman's problem
with 'grue' emeralds. Third, it is argued that further arguments against 'deductive confirmation' do not apply when conditional deductive confirmation is
also taken into account. Section 2.3. concludes with some remarks about the
problem of acceptance of wellconfirmed hypotheses.
2.1. A QUALITATIVE THEORY OF DEDUCTIVE CONFIRMATION
Introduction
Given evidence E (true), hence ¬E (false), the deductive relations between H and E yield four cases:

  H ⊨ E (¬E ⊨ ¬H):    Deductive Confirmation     DC(E,H)
  H ⊨ ¬E (E ⊨ ¬H):    Falsification              F(E,H)
  ¬H ⊨ E (¬E ⊨ H):    Deductive Disconfirmation  DD(E,H)
  ¬H ⊨ ¬E (E ⊨ H):    Verification               V(E,H)

DC(E,H; C)
whereas C may or may not be entailed by E. Hence, in view of the fact that hypotheses and initial conditions usually are logically independent, successful HD-prediction and DN-explanation of individual events form the paradigm cases of cd-confirmation, for they report 'conditional deductive successes'.2
When dealing with specific examples, several expressions related to cd-confirmation will be used with the following interpretation:

SDC:

RPP:

PS: Principle of symmetry:
H makes E more plausible iff E makes H more plausible
Note, moreover, that the combination of the first two principles makes
comparative expressions of the form "E* confirms H* more than E confirms
PCS:

For our purposes, the three principles SDC, RPP (together implying PS) and
PCS of general confirmation are sufficient. From now on in this chapter,
comparative confirmation claims will always pertain to (conditional) deductive
success and (conditional) deductive confirmation.
Now we are able to propose two comparative principles concerning d-confirmation, one (P.1) for comparing two different pieces of evidence with respect to the same hypothesis, and one (P.2) for comparing two different hypotheses in the light of the same piece of evidence.

P.1

P.2
To be sure, P.1 and P.2 are rather vague, but we will see that they have some plausible applications, called the special principles. Hence, if one finds the general principles too vague, it is suggested that one primarily judge the special principles.
The motivation of the plausibility principles is twofold. First, our claim is that P.1 and P.2 are roughly in agreement with scientific common sense concerning confirmation by successes obtained from HD-tests. For P.1 this is obvious: less expected evidence has more 'confirmation value' than more expected evidence. P.2 amounts to the claim that hypotheses should be equally praised for the same success, which is pretty much in agreement with (at least one version of) scientific common sense.
The second motivation of the principles P.1 and P.2 will be postponed to the next chapter, where we will show that they result, with some qualifications, from certain quantitative confirmation considerations of a Bayesian nature applied to HD-tests, Bayesian considerations for short, when 'more plausible' is equated with 'higher probability'.
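These Bayesian considerations can be previewed with a minimal numeric sketch. It assumes, as the next chapter does, that 'more plausible' is read as 'higher probability', that H entails E (so P(E|H) = 1), and, as an additional assumption here, that 'increase of plausibility' is read as the ratio P(H|E)/P(H); Bayes' theorem then directly yields P.1 and P.2:

```python
from fractions import Fraction as F

def posterior(prior_h, likelihood, p_e):
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    return likelihood * prior_h / p_e

# P.1: for deductive success (P(E|H) = 1) the ratio P(H|E)/P(H) equals
# 1/P(E), so less expected evidence confirms more.
prior = F(1, 10)
ratio_expected = posterior(prior, 1, F(9, 10)) / prior    # 1/P(E) = 10/9
ratio_surprising = posterior(prior, 1, F(1, 10)) / prior  # 1/P(E) = 10
assert ratio_surprising > ratio_expected

# P.2: hypotheses with different priors but the same deductive success E
# receive the same ratio increase: the theory is 'pure'.
p_e = F(1, 2)
ratios = {posterior(p, 1, p_e) / p for p in (F(1, 100), F(1, 2))}
assert ratios == {2}
```

Note that the posterior plausibilities themselves still differ with the priors, exactly as the text below observes: purity concerns the increase, not the resulting level.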
P.2 amounts to the claim that the 'amount of confirmation', more specifically, the increase of plausibility by evidence E, is independent of differences in initial plausibility between H and H* in the light of the background beliefs. In the
next chapter we will also deal with some quantitative theories of confirmation for which it holds that a more probable hypothesis profits more than a less probable one. This may be seen as a methodological version of the so-called Matthew effect, according to which the rich profit more than the poor. It is important to note, however, that P.2 does not deny that the resulting posterior plausibility of a more plausible hypothesis is higher than that of a less plausible one, but the increase does not depend on the initial plausibility. For this reason, P.2 and the resulting theory are called neutral with respect to equally successful hypotheses or, simply, pure,4 whereas theories having the Matthew effect, or the reverse effect, are called impure. The first type of impure theories is said to favor plausible hypotheses, whereas theories of the second type favor implausible hypotheses. In Subsection 2.2.1. of the next chapter, we will give an urn-model, hence objective probabilistic, illustration and defence of P.2.
The condition 'in the light of the background beliefs' in P.1 is in line with the modern view (e.g., Sober 1988, p. 60, Sober 1995, p. 200) that confirmation is a three-place relation between evidence, hypothesis and background beliefs, since without the latter it is frequently impossible to differentiate between the strengths of confirmation claims. As a matter of fact, it would have been better to include the background beliefs explicitly in all formal representations, but we have refrained from doing so to make reading the formulas easier. This does not mean that background beliefs always play a role. For instance, they do not play a role in the following two applications of the principles, that is, the first two special principles (for non-equivalent E and E*):

S.1

S.2
if E d-confirms H then
E d-confirms H&H', for any H' compatible with H,
even as much as H (due to S.2),
but E does not at all necessarily d-confirm H';
hence, the d-confirmation remains perfectly localizable

P.1c

S.1c

P.2c

S.2c
Later we will introduce two other applications of the principles, more specifically of P.1c and P.2c, or, if you want, new special principles. They will provide the coping-stones for the solution of the raven paradoxes and the grue problem. In contrast to S.1 and S.2, they presuppose background beliefs and concern conditional (deductive) confirmation. Moreover, they deal with special types of hypotheses, pieces of evidence and conditions. They will be indicated by S#.1c(ravens) and SQ.2c(emeralds), respectively.
2.2. RAVENS, GRUE EMERALDS, AND OTHER PROBLEMS AND
SOLUTIONS
Introduction
In this section it will be shown how the qualitative theory of deductive confirmation resolves the famous paradoxes of confirmation presented by Hempel and
Goodman. Moreover, we will deal with the main types of criticism in the
literature against the purely classificatory theory of deductive confirmation,
that is, when a comparative supplement is absent.
In view of the length of the relatively isolated subsection (2.2.2.) about Goodman's grue problem, the reader may well decide to skip that subsection on a first reading.
the desired result, i.e., (4), immediately follows from the third special principle, viz., the following general application of P.1c. Though the symbols are suggested by the raven example (e.g., an RB may represent a black raven, i.e., a raven which is black), they can get any other interpretation. Let #R indicate the number of R's, etc.

S#.1c(ravens):
S#.1c realizes in a precise sense the intuition that a black raven confirms "all ravens are black" (much) more than a non-black non-raven. That S#.1c is an application of P.1c can be shown as follows. If #R < #¬B and "all R are B" is false, then the percentage of RB's among the R's is lower than the percentage of ¬R¬B's among the ¬B's, hence hitting among the R's at an R which is B is less plausible (hence more surprising) than hitting among the ¬B's at a ¬B which is ¬R. For example, even in the extreme case of just one non-black raven, the first percentage is (#R − 1)/#R, which is indeed smaller than the second percentage, (#¬B − 1)/#¬B, if and only if #R is smaller than #¬B. In other words, the higher the percentage of individuals with a certain trait in a population, the more it is (to be) expected that it applies to an arbitrary member, where only the comparison of percentages is relevant, and not the trait nor the (size of the) population.5 Hence, S#.1c can be motivated by referring directly to scientific common sense, but also indirectly by showing later that it results from Bayesian considerations when the evidence is assumed to be obtained by random sampling in the relevant universe. Of course, we speak of much more confirmation in S#.1c when the background beliefs imply that #R is much smaller than #¬B (#R ≪ #¬B), as in the case of A(ravens), and hence, if RH is false, the percentage of RB's among the R's is (relatively speaking) much lower than the percentage of ¬R¬B's among the ¬B's. Precisely because typical applications of S#.1c concern such cases, it is defensible to call it a qualitative application and principle, despite its explicit reference to numbers of individuals and the reference to percentages in the motivation.6
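The biconditional behind the extreme one-non-black-raven case can be checked mechanically, and it holds under either bookkeeping of the counts, (n − 1)/n or n/(n + 1), since both are strictly increasing in n. A small sketch:

```python
from fractions import Fraction as F
from itertools import product

# n_r: number of ravens; n_nb: number of non-black objects
for n_r, n_nb in product(range(2, 40), repeat=2):
    # one non-black raven: fraction of black ravens among the ravens vs.
    # fraction of non-ravens among the non-black objects
    assert (F(n_r - 1, n_r) < F(n_nb - 1, n_nb)) == (n_r < n_nb)
    # alternative bookkeeping, with the counts excluding the odd raven
    assert (F(n_r, n_r + 1) < F(n_nb, n_nb + 1)) == (n_r < n_nb)
```

Only the comparison of the two counts matters, in line with the remark above that neither the trait nor the population size is relevant on its own.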
In sum, cd-confirmation solves the raven paradox concerning black non-ravens by (3) and the one concerning non-black non-ravens by (4), which is guaranteed by applying P.1c, or its application S#.1c(ravens), to the background assumption A(ravens) that the number of ravens is much smaller than the number of non-black objects.
There remains the question of what to think of Hempel's principles used to derive the paradoxes of confirmation. It is clear that the equivalence condition was not the problem, but Nicod's criterion that a black raven confirms RH unconditionally. Whereas Nicod's criterion is usually renounced unconditionally, we may conclude that it is (only) right in a sophisticated sense: a black raven is a case of cd-confirmation, viz., on the condition of being a raven.
2.2.2. Grue emeralds
The qualitative theory of deductive confirmation generates an instructive analysis of the other famous riddle of confirmation, i.e., the problem with 'grue'
emeralds, discovered by Goodman (1955). This problem is also called the grue
'paradox', for the same reason as one speaks about the raven paradoxes. They
both concern counterintuitive consequences of certain principles of
confirmation.
The problem with grue emeralds is that a green emerald found before the year 3000 seems to confirm not only the hypothesis that all emeralds are green but also that all emeralds are 'grue', where grue is defined as the following queer predicate: green if examined before 3000, and blue if not examined before 3000. Goodman's generally accepted account roughly is as follows: the predicate 'grue' is not well-entrenched in predictively successful scientific generalizations, hence, as it stands, it is below the mark of scientific respectability to be used in generalizations that can be confirmed, i.e., to use Goodman's other favorite expression, the 'grue hypothesis' is not (yet) projectible. We will give a related, but more detailed diagnosis of the problematic aspect. It may well be conceived as a formal explication and justification of Goodman's informal account. It can best be presented by using from time to time a formally similar, but less queer, definition of 'grue', which is only applicable to living beings, say, eagles: 'being male and green, or being female and blue'. This will be called the 'gender' reading, as opposed to the former 'temporal' reading.
Recall that we use the abbreviation: "... C-confirms ..." as a shorthand for "... cd-confirms ... on the condition (indicated by) C". We add the abbreviations: E: emerald/eagle (in this subsection not to be confused with 'evidence'), M: (known to be) examined before 3000/male, ¬M: not (known to be) examined before 3000/female, G: green, B: blue, and Q: grue (queer), i.e., MG or ¬MB. G and B are supposed to be mutually exclusive, but they are not supposed to be exhaustive.
We will first specify in detail to what extent 'green' and 'grue' are similar,
and show that additional assumptions are needed to create the intuitively
desirable asymmetry. More specifically, it will be shown that not only a strong,
but also a weak irrelevance assumption is suitable for this purpose. Both are
in line with the entrenchment analysis of Goodman.
The basic intuition
The basic intuition of Goodman is, of course, that, though a green emerald investigated before 3000 confirms 'the green hypothesis' ("all E are G"), it does not confirm 'the grue hypothesis' ("all E are Q"). It postulates an asymmetry in confirmation behavior between 'green' and 'grue'. However, from the unconditional version of Nicod's criterion, 'Nicod-confirmation', we not only get

(1) an EMG Nicod-confirms "all E are G"

the wrong impression that it implies (6*) and the intuition that it would indeed be absurd if (6*) were to obtain. In other words, we take the invalidity of (6*) as the proper interpretation of the confirmation-rejecting side of the basic intuition, instead of the originally, but wrongly, suggested invalidity of (5).
It is interesting to see in more detail how (5), (6*), and (3) are related. Note that the following equivalence holds:

(7) "all E are Q" ⟺ "all EM are G" & "all E¬M are B"

(5) an EMG EM-confirms "all EM are G" & "all E¬M are B"
Now it is easy to see that (3), (5) and (6*) form an illustration of the conditional version of the 'proper conjunction connotation' of Section 2.1. That is, whereas cd-confirmation according to (3) transmits to a conjunction with some other hypothesis according to (the equivalent version of) (5), and the latter confirmation is, according to S.2c, even as much as the former, no confirmation transmits to the added conjunctive hypothesis, for (6*) is invalid.
The question why (6*) would be absurd if valid, however, remains interesting. Is it, prima facie in line with Goodman's entrenchment considerations, because it amounts to a surprising prediction across some clearly defined border (a year, gender), breaking the continuity of nature? In this case, it would be plausible to expect that the additional hypothesis suggested by continuity considerations, viz., "all E¬M are G", is EM-confirmed by an EMG, since G is well-entrenched, and the trouble with (5) would merely be caused by the queer, non-entrenched character of Q. Or is it because the 'grue-induced' additional hypothesis "all E¬M are B" reaches incautiously over a border that might be a relevant distinction? In this case also the 'green-induced' additional hypothesis should not be EM-confirmed by an EMG, and the usual, but wrong, assumption that this is implied by (4), is brought to light by the queer predicate.
It is easy to check that the second option is the proper answer from the perspective of cd-confirmation. That is,

(8*) an EMG EM-confirms "all E¬M are G"

is invalid, for the same reason as (6*): the antecedent E¬M of the hypothesis 'cannot be put to work' by the condition EM to derive G and B, respectively. This suggests that (8*) can also provide an illustration of the proper conjunction connotation. Note, for this purpose, that the following equivalence obtains:

(9) "all E are G" ⟺ "all EM are G" & "all E¬M are G"

and hence that (4) is equivalent to

(4) an EMG EM-confirms "all EM are G" & "all E¬M are G"
Accordingly, in spite of the validity of (3) and (4), (8*) is invalid, precisely for the same reason that (6*) is invalid as opposed to (3) and (5), viz., being another triple of instances of the proper conjunction connotation. Conditional deductive confirmation (3) transmits to a conjunction with some other hypothesis (4) and this confirmation is as much as that of (3), according to S.2c, but the confirmation does not transmit to the added conjunctive hypothesis (8*). In sum, the prima facie absurdity of (5) has a hidden analogue in (4), which is also due to improper connotations. The invalidity of (8*) is a formal blockade against confirmation claims that cross a border that may be relevant:

cd-confirmation blockade: for all E, M and G, although an EMG EM-confirms "all EM are G" (3) and even "all E are G" (4), it does not EM-confirm "all E¬M are G" (8*).

Whereas the blockade may seem superfluous in the temporal reading, it is clear that, for instance, in the gender reading it is highly plausible and desirable. A green male eagle does not (cd-)confirm the hypothesis that all female eagles are green. Awareness of the blockade is, for instance, expressed in the feminist criticism of male-oriented drug research. Results of testing drugs on male subjects have frequently been extrapolated to women in an irresponsible way (see e.g., Cotton 1990; Ray et al. 1993).
So far, however, the results of the conditional deductive perspective are symmetric with respect to green and grue: (4) and (5) on the one hand, and (6*) and (8*) on the other. Moreover, (3) is a common feature of both. Hence, the question remains how to account for the asymmetric basic intuition. The foregoing analysis shows that an additional assumption is needed to create an asymmetric situation.
There are at least two possible ways. In the first way, a hypothesis is added which removes the blockade, at the expense of the grue hypothesis. In the second way, the blockade is not removed, but the grue hypothesis is downgraded, without excluding it.

Asymmetry by an extra assumption

SIA: for all colors C, "all EM are C" implies "all E are C",
and hence "all E¬M are C", and vice versa.
This is in agreement with the intuition behind the grue problem that the artificial time barrier will not change the color. However, according to the cd-analysis, this is only guaranteed when we take this formally into account by the auxiliary assumption SIA. In conjunction with SIA it is even guaranteed that an EMG not only falsifies "all EM are B", and hence "all E are B", but also "all E¬M are B", for the latter and SIA now imply "all E are B", and hence, in view of (7), an EMG falsifies "all E are Q". Hence, instead of the invalid confirmation claim (6*) that an EMG EM-confirms "all E¬M are B", we may now even conclude that an EMG 'SIA-falsifies' that hypothesis in the plausible sense that (EMG & SIA) is incompatible with the hypothesis:

(6SIA)
It is clear that SQ.2c is a straightforward application of P.2c, using the fact that an EMG EM-confirms both hypotheses. Moreover, in the emerald version, it is safe to assume as background belief the following weak irrelevance assumption:

WIA(emeralds): for all colors C and C', C ≠ C', "all E are C" is (much) more plausible than the conjunction "all EM are C" & "all E¬M are C'" (which is equivalent to "all E are Q" when C = G and C' = B)

In view of the fact that (4) and (5) hold, we may apply the general principle of comparative symmetry (PCS) and P.2c, or its application SQ.2c, to our background belief WIA, which directly leads to the asymmetric cd-explication of the refined intuition, that is, (4&5).
Note that SIA implies WIA as soon as we assume that SIA amounts to the implication that grue-like hypotheses lack any plausibility, whereas green-like hypotheses have at least some plausibility. Hence, in the light of SIA, (4&5) provides an asymmetry additional to that between (6SIA) and (8SIA).
In the gender reading, however, only WIA may have some plausibility, but not SIA. That is, it may well be that we would like to subscribe to the background belief that the green hypothesis is (much) more plausible than the grue hypothesis, without excluding the latter. The reason would be, of course, that a systematic color difference between the sexes of a species regularly occurs, though supposedly not as frequently as sex irrelevance for color. Even in the temporal reading, WIA is defensible, and SIA not. It is surely the case that, as far as we know, there are no types of stones that have changed color at a certain moment in history. However, this does not exclude the possibility that this might happen at a certain time for a certain type of stone, by some cosmic event. To be sure, given what we know, any hypothesis which presupposes the color change is much less plausible than any hypothesis which does not.
It is important to note that we cannot simply take as an assumption that "all E are G" is more plausible than "all E are Q", that is, without reference to background beliefs, but only with the motivation that the former generalization expresses more uniformity or continuity of nature than the latter. That is, it may seem that adding "all E¬M are G" to the common generalization of "all E are G" and "all E are Q", viz., "all EM are G", giving rise to "all E are G", is more in line with that common generalization than adding "all E¬M are B", giving rise to "all E are Q". If one thinks this way one does so because one assumes that an E¬MG is more similar to an EMG than an E¬MB is. However, this 'uniformity argument' hinges upon the particular E/M/G-language. As is well known from the discussion of the grue problem (and of the problem of language dependence of definitions of verisimilitude, see Zwart 1995), such arguments are language dependent. More specifically, in the E/M/G-language the suggested uniformity argument would imply that "all E are G" is, for example, more plausible than "for all E: M iff G". However, in the E/M/X-language, with X =def M iff G, the suggested uniformity argument would imply the opposite plausibility claim, viz., that "all E are X" is more plausible than "for all E: M iff X". Hence, reference to background beliefs is unavoidable.
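The language dependence of the uniformity argument can be verified mechanically. With X defined as 'M iff G', the hypothesis "all E are G" says exactly "for all E: M iff X", and "for all E: M iff G" says exactly "all E are X"; a sketch enumerating the four possible M/G cell types:

```python
from itertools import product

def X(m, g):
    # the predicate of the E/M/X-language: X holds iff (M iff G)
    return m == g

for m, g in product([True, False], repeat=2):
    # 'all E are G' translates into 'for all E: M iff X' ...
    assert g == (m == X(m, g))
    # ... and 'for all E: M iff G' translates into 'all E are X',
    # so the two languages mirror each other exactly
    assert (m == g) == X(m, g)
```

Since the translation is a perfect mirror, any uniformity ranking stated in one language reverses in the other, which is just the point made in the text.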
In sum, the basic intuition can be justified in terms of cd-confirmation in two ways. In the first way, a strong assumption is added which excludes the grue hypothesis. In the second way, the basic intuition is refined by downgrading the grue hypothesis, without excluding it. Both are very much in the spirit of Goodman's entrenchment approach. Goodman's specific example of grue primarily suggests the first, dichotomous way, since green is, and grue is not, well-entrenched (in the temporal reading). However, his general exposition in terms of more or less entrenched predicates primarily suggests the second, gradual way. As has been noted, the first way is a kind of extreme version of the second.8 Hence, the above analysis is highly congenial to Goodman's informal account in terms of entrenchment. However, the cd-analysis localizes in formal detail the symmetric point of departure for two asymmetric explications, a stronger and a weaker one.
2.2.3. Objections to (conditional) deductive confirmation
In the literature several objections have been expressed to the very idea
of (conditional) deductive confirmation. Hence, our specific account of
(un)conditional deductive confirmation is also subject to them. The reader is
2.3. ACCEPTANCE OF HYPOTHESES
Let us, finally, briefly deal with the problem of the acceptance of hypotheses.
Explicating the idea of deductive confirmation of a hypothesis is one thing,
explicating the idea of being sufficiently confirmed to be accepted is another.
For the latter issue it is important to recognize that a hypothesis may be highly
confirmed, without having become very plausible, since it may have been very
implausible at the start. For acceptance we need something like 'being sufficiently confirmed to have become sufficiently plausible to be accepted'. If it
was already plausible at the start, the acquired confirmation may not have been very important for this purpose. Hence, what is crucial is 'being sufficiently plausible for being accepted'. Acceptance criteria may depend on the nature of the
hypothesis: is it of an individual or a general nature? does it contain theoretical
terms? etc. Moreover, they will depend on one's epistemological position, for
there are, of course, various types of being accepted, roughly corresponding to
the epistemological positions. To begin with the latter, for the realist acceptance
of a hypothesis amounts to accepting the hypothesis as literally true, including
its observational, referential and theoretical consequences. The referentialist
will drop the theoretical consequences, and the empiricist, in addition, the
referential consequences. Finally, the instrumentalist will drop the observational consequences concerning non-intended applications. In all cases, acceptance of
a hypothesis means that it is to be added to the body of background beliefs,
of which the general status, of course, also depends on the relevant epistemological position.
Within each of the above mentioned types of accepting a hypothesis as true
we could also distinguish between at least five 'kinds of truth': true simpliciter,
approximately true, the truth, near to the truth, nearer to the truth than another
hypothesis. The above, epistemologically induced, qualifications were primarily
intended for the first and the third kind of truth, that is, 'true simpliciter' and
'the truth'. The fifth kind will be explicated for the various epistemological
types in great detail in later chapters. Although we have used, and will use,
informally expressions referring to the second and the fourth kind of truth,
such as 'approximately true' and 'near to the truth' themselves, we will not try
to give precise explications of them. It is clear that 'approximately true' would
need some conventional threshold for deviations from being true. Similarly,
'near to the truth' would need some threshold for being sufficiently near to the
truth. Of course, these five kinds have each two modes, the actual mode and
the nomic one. However, this will only become relevant in later chapters.
Turning to 'true simpliciter' and 'the truth', and assuming that the hypothesis is of a general nature, with general test implications in different directions, and using theoretical terms, the three positions beyond the instrumentalist one (which we will further neglect as far as acceptance is concerned) have to make, sooner or later, inductive jumps or, simply, inductions of at least four kinds,12 that is, from a threshold plausibility for each of the distinguished epistemological senses to the corresponding type of truth. The empiricist has to make
Concluding remarks
In the chapters to come, several new matters concerning hypothesis testing will
be dealt with. Chapter 3 mainly deals with a pure quantitative theory of confirmation, and briefly discusses the severity of tests. Chapter 5 will include a detailed analysis of the derivation of test implications, and the complications arising from, amongst others, auxiliary hypotheses. Moreover, testability will be defended as a necessary condition for being an empirical hypothesis. At the end of Chapter 7 we will have something to say on the nature and role of novel facts, ad hoc repairs and crucial experiments.
In the course of these chapters it will become clear that the role of falsification and confirmation has to be relativized in several respects. Prima facie falsification may be disputed in several ways, e.g., by questioning the description of the counterexample or the truth of the auxiliary hypotheses needed to derive the relevant test implication (Section 5.2.). More fundamentally, it will turn out that 'being false', and 'being true' for that matter, is from the point of view of empirical progress and truth approximation rather irrelevant, hence falsification will have to play a more modest role than is frequently assumed
(Section 6.2. and Chapter 7). We will also see that the realist may even claim
against the empiricist that one theory may be closer to the truth in the
encompassing theoretical sense than another, even though the first has some
counterexamples which are no counterexamples to the second (Chapter 9).
Similarly, the role of confirmation will be relativized along the same lines and roughly at the same places. Since 'confirmation' has the connotation of not yet being falsified, that is, the hypothesis may still be true, and since it will turn out to make perfect sense to continue the 'HD-evaluation' of a theory, even though it has been falsified, the confirmation of a theory is not so important, but the more general notion of obtaining (general) successes is very important (Chapters 5 and 6). Prima facie successes may be disputed in ways similar to prima facie falsification. Moreover, the obtainment of a success plays
a modest role similar to that of a counterexample. However, now the realist
cannot claim that a theory can be closer to the theoretical truth than another
despite the fact that the other has one or more extra successes.
Although the role of confirmation and falsification will be strongly relativized, this does not mean that there is no need of a qualitative theory of deductive confirmation and falsification, as developed in this chapter. On the contrary, the notions of confirmation and falsification remain of crucial importance for testing at least three types of hypotheses: (1) general test implications and similar general observational hypotheses (Section 5.1.), (2) comparative success hypotheses (Chapter 6), and (3) truth approximation hypotheses (Chapters 7, 9 and 10). Although it will turn out that testing truth approximation hypotheses presupposes testing comparative success hypotheses and this in its turn testing general test implications, it may, however, not be concluded that there is a strong direct relation between confirmation and truth approximation. On the contrary, as suggested before, it will turn out that there is no direct link between 'being true or false' and 'truth approximation'. This does not exclude that there is some sophisticated link between confirmation and truth approximation, but this will not be explored in this book (see Festa 1999a, Section 4).13
The three indicated remaining crucial roles of confirmation and falsification highlight some main features of the short-term dynamics of science, that is, the separate and comparative evaluation of theories, to be presented in Part II, and the testing of truth approximation claims, to be presented in Parts III and IV. Hence, we will have ample occasion to refer to the present chapter, but we will only do so when it is particularly illuminating. However, as we have seen in Section 2.3., there remains of course the possibility that, after a small or large number of theory transitions along such evaluation lines, we have arrived at a theory of which we may seriously conjecture that it is observationally, referentially or even theoretically true (in a weak or strong sense, to be specified). Then confirmation and falsification again guide the choice between rejection of that conjecture and acceptance, where the latter may concern observational, referential or theoretical induction, respectively, and may pave the way for the long-term dynamics by providing the means for enlarging the set of observation terms. In the process of sorting out theories with the same claim, the background beliefs, determining the initial plausibility of evidence and hypotheses, play a crucial role.
The main topics of the next two chapters are a general quantitative
theory of confirmation and a survey of a coherent set of inductive specifications.
Since they will play only a marginal role in the remainder of this book, these
chapters may be skipped on a first reading by readers not interested in the
very idea of quantitative confirmation.
3
QUANTITATIVE CONFIRMATION, AND ITS
QUALITATIVE CONSEQUENCES
Introduction
In the previous chapter we developed a qualitative (classificatory and
comparative) theory of deductive confirmation, guided by the success perspective. In this chapter we will present, in Section 3.1., the corresponding quantitative theory of confirmation, more specifically, the corresponding probabilistic
theory of confirmation of a Bayesian nature, with a decomposition into deductive
and non-deductive confirmation. It is again pure in the sense that all equally
successful hypotheses profit from their success to the same degree. It is inclusive
in the sense that it leaves room for confirmation of hypotheses with zero
probability (p-zero hypotheses). In Section 3.2. the resulting qualitative theory
of (general) confirmation, encompassing the qualitative theory of deductive
confirmation, will be indicated. Finally, in Section 3.3. we will briefly discuss
the acceptance of hypotheses in the light of quantitative confirmation. In
Appendix 1, it will be argued that Popper's quantitative theory of corroboration
amounts to an inclusive and impure Bayesian theory of confirmation. In
Appendix 2 the quantitative treatment of the raven paradoxes resulting from
our quantitative theory is compared in detail with an analysis in terms of the
standard Bayesian solution as presented by Horwich.
The quantitative approach to confirmation has a somewhat dubious character, since the assigned probabilities are, as a rule, largely artificial. Their main
purpose is to lead to adequate qualitative (classificatory and comparative)
judgments of confirmation. As far as deductive confirmation is concerned, we
have seen in the previous chapter that we do not need a quantitative approach
for that purpose. However, since to date no independent or direct qualitative
theory of general confirmation, or of non-deductive confirmation, has been
developed, a quantitative approach is required for that purpose. Such a dependent or indirect qualitative theory of general and non-deductive confirmation
will be presented in the second section.
Accordingly, we do not claim that the quantitative theory reflects quantitative
cognitive structures concerning confirmation. Instead, quantitative theories should primarily
be conceived as quantitative explications of qualitative cognitive structures, to
be used only for their qualitative consequences. As will be argued, the justification of these qualitative consequences is at least as good as the justification of
the quantitative explication itself.
3. 1. QUANTITATIVE CONFIRMATION
Introduction
Deductive Confirmation:      DC(H, E)  iff  p(E/H) = 1
Falsification:               F(H, E)   iff  p(E/H) = 0
Deductive Disconfirmation:   DD(H, E)  iff  p(E/¬H) = 1
Verification:                V(H, E)   iff  p(E/¬H) = 0
Here we assume that there is some defensible probability function p, i.e., p may
well have subjective features, though then as much as possible in agreement
with objective information. In line with Bayesian philosophers of science
(Howson and Urbach 1989; Earman 1992), we will call p(E/H) and p(E/¬H)
likelihoods.2
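Since the four cases just defined depend only on the two likelihoods p(E/H) and p(E/¬H), the classification can be written out mechanically. The following is our own minimal sketch, not part of the book; the function name `classify` and the numeric examples are illustrative assumptions:

```python
def classify(p_e_h, p_e_noth):
    """Classify evidence E for hypothesis H from the two likelihoods
    p(E/H) and p(E/not-H); extreme values give the deductive cases."""
    if p_e_h == 0:
        return "Falsification"              # F(H, E): p(E/H) = 0
    if p_e_noth == 0:
        return "Verification"               # V(H, E): p(E/not-H) = 0
    if p_e_h == p_e_noth:
        return "Neutral"                    # E is irrelevant to H
    if p_e_h == 1:
        return "Deductive Confirmation"     # DC(H, E): p(E/H) = 1
    if p_e_noth == 1:
        return "Deductive Disconfirmation"  # DD(H, E): p(E/not-H) = 1
    if p_e_noth < p_e_h:
        return "Non-deductive Confirmation"
    return "Non-deductive Disconfirmation"

print(classify(1, 0.4))    # Deductive Confirmation
print(classify(0.3, 0.6))  # Non-deductive Disconfirmation
```

The non-extreme branches anticipate the S-criterion discussed below: confirmation obtains exactly when p(E/¬H) < p(E/H).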
[Figure: the confirmation square (CS), spanned by the likelihoods p(E/H) and p(E/¬H); its sides and interior mark, among others, the regions of Verification, Non-deductive Confirmation, Non-deductive Disconfirmation, and Deductive Disconfirmation.]
soon as we equate plausibility with probability. Of course, the criterion for 'no
confirmation' or neutral evidence is:
p(E) = p(E/H)
To get a better view of the extreme cases, represented by the sides of CS, and
of the non-extreme cases, represented by the interior, we first formulate equivalent criteria of confirmation, disconfirmation and neutral evidence.
Confirmation:                  C(H, E)   iff  p(E/¬H) < p(E/H)
Disconfirmation:               D(H, E)   iff  p(E/H) < p(E/¬H)
Non-deductive Confirmation:    NC(H, E)  iff  0 < p(E/¬H) < p(E/H) < 1
Non-deductive Disconfirmation: ND(H, E)  iff  0 < p(E/H) < p(E/¬H) < 1
Proper Confirmation:           PC(H, E)
Proper Disconfirmation:        PD(H, E)
and their conditional versions, relative to a condition C: C(H, E; C), D(H, E; C), NC(H, E; C), PC(H, E; C), etc.
H is a p-zero hypothesis       iff  p(H) = 0
H is a p-one hypothesis        iff  p(H) = 1
H is a p-normal hypothesis     iff  0 < p(H) < 1
H is a p-uncertain hypothesis  iff  p(H) < 1
p(H) < p(H/E)
(see e.g., Carnap 1963, the new foreword to Carnap 1950, Horwich 1982,
Howson and Urbach 1989). This confirmation criterion, stating that the posterior probability is larger than the prior probability, may be said to be not
success- but truth-value oriented. It will be called, more neutrally, the
F(orward)-criterion. It is in perfect agreement with the common sense idea,
expressed in the reward principle of plausibility (RPP) in the previous chapter,
that confirmation normally increases, or leads to the increase of, the probability of the hypothesis. Assuming that H is p-normal, the F-criterion is
equivalent to the S-criterion, C(H, E), as is easy to check. In view of the
"p(E/¬H) < p(E/H)" version of the S-criterion, its decomposition of Bayesian
confirmation amounts to the following claim: assuming p-normality of H, the
F-criterion expressing Bayesian confirmation can be naturally decomposed
into three mutually exclusive and together exhaustive possibilities in
which the (equivalent) S-criterion can be satisfied: two extreme possibilities,
viz., verification (0 = p(E/¬H) < p(E/H)) and deductive confirmation
(p(E/¬H) < p(E/H) = 1), and the non-extreme possibility, viz., non-deductive
confirmation (0 < p(E/¬H) < p(E/H) < 1).
The important difference is that the S-criterion is also non-trivially applicable
to p-zero hypotheses. Whereas the F-criterion makes all evidence neutral with
respect to p-zero hypotheses (for p(H) = 0 implies p(H/E) = 0), the S-criterion
leaves room for confirmation of such hypotheses. However, since
p(H/E) remains 0, the confirmation is, as it were, not rewarded in this case.
Note that the situation is different for p-one hypotheses. If p(H) = 1 then,
assuming that E and H are compatible, p(H/E) = p(H) and p(E/H) = p(E).
Hence, according to both criteria, p-one hypotheses cannot be confirmed. Note
in this connection also that, in contrast to the fact that the confirmation of a
p-normal hypothesis amounts to the disconfirmation of its negation, the confirmation of a p-zero hypothesis according to the S-criterion does not, of course,
amount to the disconfirmation of its negation according to either criterion,
as is easy to check. In view of the deviating behavior of the
S-criterion regarding p-zero hypotheses, the S-criterion will be called inclusive
and the F-criterion non-inclusive. Hence, although the inclusive and the non-inclusive criteria are equivalent for the non-zero cases, they are incompatible
for the zero cases. As we will see in Appendix 1, Popper's approach (Popper
1959, 1963a, 1983) also presupposes the S-criterion, and hence is inclusive.
Inclusive behavior is very important in our opinion. Although there may be
good reasons (contra Popper, see Appendix 1) to sometimes assign non-zero
probabilities to genuine hypotheses, it also occurs that scientists would sometimes assign zero probability to them in advance and would nevertheless
concede that certain new evidence is in favor of them.
Whereas deductive confirmation has only one 'cause', viz., that the evidence is entailed
by the hypothesis, non-deductive confirmation may have different causes. In
the following we will restrict our attention to p-normal hypotheses and evidence.
As Salmon (1969) already pointed out in the context of the possibilities of an
inductive logic, a probability function may be such that E confirms H when H
partially entails E. Here 'partial entailment' essentially amounts to the claim
that the relative number of models in which E is true on the condition that H
is true is larger than the relative number of models in which E is true without
any condition.6 For instance, a 'high outcome' (4, 5, or 6) with a fair die
partially entails an even outcome, and vice versa. Both probabilistic criteria
lead to confirmation, since, e.g., p(4∨5∨6/2∨4∨6) = 2/3 > 1/2 = p(4∨5∨6). In a
'color language' with at least four colors, p will be such that the evidence that
a raven is black or white confirms the hypothesis that it is black or red. In
general, one may require that a probability function satisfies the principle of
partial entailment: if H partially entails E (¬E) then E confirms (disconfirms)
H. Fortunately, it seems that a probability function usually satisfies this principle. However, and this was Salmon's main message, it is not at all guaranteed
that such a function is such that E confirms H when H essentially is an
(inductive) extrapolation of E, notably from past to future instances of a certain
kind. For instance, one might want the evidence that the first raven
is black to confirm the hypothesis that the second raven is black as well. In
general, one may require that a probability function satisfies the principle of
extrapolation (or induction): if H extrapolates upon E (¬E) then E confirms
(disconfirms) H.7 In the next chapter we will study probability functions, e.g.,
Carnap's continuum of inductive methods, which satisfy both principles. Of
course, such functions are such that a hypothesis H which partially entails E
and extrapolates upon E is confirmed by E. In sum, we may distinguish at least
three causes or types of non-deductive confirmation: due to partial entailment,
which might be called 'partial (deductive) confirmation', due to extrapolation,
to be called 'inductive confirmation', and due to both factors. In Chapter 4 we
will introduce a third factor: analogy.
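The fair-die illustration of partial entailment can be checked by direct enumeration. The following is our own sketch of the arithmetic, not part of the text; the helper names `p` and `p_cond` are illustrative:

```python
from fractions import Fraction

# Fair-die model: each outcome 1..6 has probability 1/6.
outcomes = set(range(1, 7))

def p(event):
    """Probability of an event, given as a set of outcomes."""
    return Fraction(len(event & outcomes), 6)

def p_cond(event, given):
    """Conditional probability p(event | given) by counting."""
    return Fraction(len(event & given), len(given))

high = {4, 5, 6}   # 'high outcome'
even = {2, 4, 6}   # 'even outcome'

# Partial entailment in both directions: conditioning raises the probability.
print(p_cond(high, even), ">", p(high))   # 2/3 > 1/2
print(p_cond(even, high), ">", p(even))   # 2/3 > 1/2
```

This reproduces p(4∨5∨6/2∨4∨6) = 2/3 > 1/2 = p(4∨5∨6), and shows the relation is mutual.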
3.1.2. The ratiodegree of confirmation
Although the quantitative theory of confirmation presented thus far already
allows qualitative judgments of deductive and non-deductive confirmation, for
comparative purposes we also need a degree of confirmation. In the previous
chapter we explicated 'confirmation' qualitatively as an increase of plausibility of, in the first place, the evidence (SOC), and, in the second place, the
hypothesis (RPP). In the present probabilistic context, it is plausible to identify
plausibility with probability, and hence, as we have noted, confirmation with an increase of the probability of the evidence, with the consequence, as far as p-normal hypotheses are concerned, that confirmation is rewarded by an increase
of the probability of the hypothesis.
There are many ways of defining a degree of confirmation, several
having some prima facie plausibility.8 In the introduction we have already
remarked that, as long as one uses the probability calculus, it does not matter
very much which confirmation theory one chooses, and hence which degree of
confirmation; the only important point is always to make clear which one
has been chosen. In this section, we will restrict our attention mainly to one degree
of confirmation, viz., the ratio degree of confirmation, with some reference to
the standard and non-standard difference degrees of confirmation.9 Let us begin
with the latter, d(H, E) =def p(H/E) − p(H), that is, the difference between the
posterior and the prior probability of the hypothesis. From the success perspective, d'(H, E) =def p(E/H) − p(E) is an at least as plausible difference measure,
for it expresses in a way to what extent E is a success of H. Since they usually
give different values, one has to choose between them.
The ratio degree of confirmation is usually presented as the ratio of the
posterior and the prior probability, p(H/E)/p(H). However, from the success
perspective, the ratio p(E/H)/p(E) is at least as plausible an indicator of the
extent to which E is a success of H. The latter ratio may well be called the
amount or degree of success of H on the basis of E. Fortunately, we do not
now have to choose, for the two ratio measures are trivially equivalent whenever
they are defined; hence we define:
r(H, E) =def p(H/E)/p(H) = p(E/H)/p(E) = p(H&E)/(p(H)·p(E))
The first ratio is undefined when p(H) = 0, and the same holds for the second and the third ratio when p(E) = 0. Since p is, as
a rule, not just an objective probability, both possibilities should not be
excluded beforehand. Recall that we have assumed that p(E/H) can be
interpreted when p(H) = 0, and that p(H/E) can be interpreted when p(E) =
0. Hence, r(H, E) is almost always defined; that is, it is always defined,
except when both p(E) and p(H) are zero, or when one of them is 0 such
that the corresponding conditional probability cannot be interpreted, possibilities that will further be disregarded.
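The triple equivalence in the definition of r(H, E) is easy to verify on a fair-die example. The following check is our own sketch; the events chosen (hypothesis 'even', evidence 'larger than 1') anticipate an example used later in this section:

```python
from fractions import Fraction

# Fair-die model; events are subsets of {1,...,6}.
def p(event):
    return Fraction(len(event), 6)

def p_cond(a, b):          # p(a | b) by counting
    return Fraction(len(a & b), len(b))

H = {2, 4, 6}              # hypothesis: even outcome
E = {2, 3, 4, 5, 6}        # evidence: outcome larger than 1

r1 = p_cond(H, E) / p(H)               # p(H/E)/p(H)
r2 = p_cond(E, H) / p(E)               # p(E/H)/p(E)
r3 = p(H & E) / (p(H) * p(E))          # p(H&E)/(p(H)p(E))
print(r1, r2, r3)                      # all three are equal: 6/5
```

All three forms give r = 6/5: E is a (modest) success of H, and H is rewarded by the same factor.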
In the following, we will evaluate the r-degree of confirmation in some
detail, partly in comparison with the d-degree and the d'-degree. To begin
with, being almost always defined need not be a positive feature; that
depends on the values that are assigned. For a first major advantage of r
over d and d' we study their extreme behavior. Note first that r has the
neutral value 1 and that d and d' both have the neutral value 0. Higher
values indicate, of course, confirmation and lower values disconfirmation.
Let us see what happens under the extreme conditions that p(H) or p(E) is
zero. When p(H) = 0, d gets the neutral value. Hence d reflects the F-criterion
of confirmation, according to which a p-zero hypothesis is always neutrally
confirmed. That is, a hypothesis that is impossible according to p cannot be
confirmed or disconfirmed by evidence; all evidence is, by definition, neutral
for such hypotheses, a very strange situation indeed. Similarly, d' gets the
neutral value whenever p(E) = 0. So, according to d', evidence that is
impossible according to p can neither confirm nor disconfirm a hypothesis, but
is always neutral. Note that in both cases it would be less objectionable
if the degree of confirmation were not defined. It is the assignment
of the neutral value which is conceptually unattractive.
It is easy to check that r may well assign a non-neutral value when either
p(H) or p(E) is zero (assuming that p(E/H), respectively p(H/E), can be
interpreted), and, as already remarked, it is undefined when p(H) = p(E) =
0. When p(H) and p(E) are both non-zero, r(H, E) reflects both the S- and
the F-criterion of confirmation; it reflects the S-criterion when p(H) = 0 and
the F-criterion when p(E) = 0. Hence, we may say that the ratio degree r
shows refined extreme behavior, whereas d and d' show conceptually implausible extreme behavior.
To be sure, when p(H) = 0 and r(H, E) > 1, r(H, E) expresses confirmation
which is not rewarded, since p(H/E) remains 0. Note that r(H, E) equals
p(E/H)/p(E/¬H)10 when p(H) = 0, since p(E) then equals p(E/¬H). Similarly,
when p(E) = 0 and r(H, E) > 1, E is not recognized as confirming evidence,
since p(E/H) remains 0. In the case that r(H, E) > 1 and p(H) and p(E) are
both positive, E is recognized as confirming evidence of H, in the sense that
p(E/H) has increased with respect to p(E) by the factor r(H, E), whereas H is
rewarded for that success, in the sense that p(H/E) has increased with respect
to p(H) by the same factor.
A second feature of the r-measure is that it is a P-incremental measure11 in
the sense that it is (or can be written as) a function only of the probabilities
p(H/E) and p(H) which increases with increasing p(H/E) and decreases with
increasing p(H).12 It may also be called an L-incremental measure in the sense
that it is (or can be written as) a function only of the likelihoods p(E/H) and
p(E) which increases with increasing p(E/H) and decreases with increasing
p(E). Note that d is also P-incremental, but not L-incremental, whereas d' is
L-incremental, but not P-incremental.
Next, the ratio of the r-degrees of confirmation of two hypotheses on the
basis of the same evidence, r(H1, E)/r(H2, E), just equals the ratio of the
likelihoods, p(E/H1)/p(E/H2).13 This nicely fits the so-called likelihood ratio
approach in statistics to comparing two statistical hypotheses with each other,
assuming an underlying statistical model (see Note 10). Although d' is
L-incremental, it is not easily connectable to this statistical practice.
An important further difference between r and both d and d' is that r is
symmetric, that is, r(H, E) = r(E, H), whereas d and d' are asymmetric: d(H, E)
is unequal to d(E, H); in fact it is equal to d'(E, H), and similarly for d'.
Symmetry is particularly appealing in cases where the hypothesis is of the same
nature as the evidence. Consider, for example, the hypothesis (H) that the
outcome of a fair die will be even in relation to the evidence (E) that the
outcome is larger than 1, and the reverse situation in which the evidence reports an
even die (E' = H) and the hypothesis (H' = E) states that the outcome will be
larger than 1. An asymmetric degree of confirmation may imply that E confirms
H more (or less) than E' (= H) confirms H' (= E), and d and d' do so. The
symmetry of r is, of course, directly related to the fact that r(H, E) can be seen
as a degree of mutual dependence between H and E, since independence is
usually defined by the criterion p(H&E) = p(H)p(E).14
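For the die example just given (H: even; E: larger than 1), the symmetry of r and the asymmetry of d and d' can be checked numerically. A sketch under the fair-die model, with our own helper names:

```python
from fractions import Fraction

def p(event):
    return Fraction(len(event), 6)

def p_cond(a, b):          # p(a | b) by counting
    return Fraction(len(a & b), len(b))

def r(h, e):  return p_cond(h, e) / p(h)       # ratio degree
def d(h, e):  return p_cond(h, e) - p(h)       # standard difference degree
def dp(h, e): return p_cond(e, h) - p(e)       # success-perspective d'

H = {2, 4, 6}         # outcome will be even
E = {2, 3, 4, 5, 6}   # outcome larger than 1

print(r(H, E) == r(E, H))    # True: r is symmetric
print(d(H, E), d(E, H))      # 1/10 vs 1/6: d is asymmetric
print(d(H, E) == dp(E, H))   # True: d(H, E) equals d'(E, H)
```

So d rates "E confirms H" and "H confirms E" differently, while r assigns both the same value 6/5.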
Some special values of r(H, E) are relatively simple. For instance, r(H, E)
increases from 0, for falsification, via p(E/H)/[1 − p(H)p(¬E/H)] for deductive disconfirmation, to 1, for neutral (including tautological) evidence, from
which it increases further, via 1/p(E) for deductive confirmation, to 1/p(H),
for verification. The last value is, moreover, the maximum degree of
confirmation a hypothesis can get, viz., 1/p(H) for verification, e.g., when
E = H. Note that this maximum is hypothesis-specific, and that we have the
plausible extreme consequence that verification of a p-zero hypothesis
amounts to obtaining an infinite degree of confirmation. Similarly, 1/p(E) is
the maximum degree of confirmation a certain E can provide for a hypothesis,
viz., by deductive confirmation, with the plausible extreme consequence that
the degree of confirmation in the case of deductive confirmation by p-zero
evidence is infinite.
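These special values are easy to confirm in the fair-die model: with H the outcome 6 and E 'even', H entails E, so r equals 1/p(E); with E = H we reach the hypothesis-specific maximum 1/p(H). A sketch with our own helpers, not from the text:

```python
from fractions import Fraction

def p(event):
    return Fraction(len(event), 6)

def p_cond(a, b):
    return Fraction(len(a & b), len(b))

def r(h, e):                       # ratio degree: p(e/h)/p(e)
    return p_cond(e, h) / p(e)

even = {2, 4, 6}
six  = {6}

# Deductive confirmation: 'six' entails 'even', so r = 1/p(E)
print(r(six, even), 1 / p(even))   # 2 and 2

# Verification: E = H gives the maximum 1/p(H)
print(r(six, six), 1 / p(six))     # 6 and 6
```

As p(H) or p(E) shrinks toward 0, these maxima grow without bound, matching the "infinite degree" limiting cases above.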
3.1.3. Comparing and composing degrees of confirmation
The r-degree is pure: if p(E/H) = p(E/H*) then r(H, E) = r(H*, E). Moreover, the r-degree of confirmation provided by a disjunction of two incompatible pieces of evidence is the weighted sum of the degrees provided by the disjuncts:
Th.3.1   if p(E&E') = 0 then
r(H, E∨E') = [p(E)/(p(E)+p(E'))]·r(H, E) + [p(E')/(p(E)+p(E'))]·r(H, E')
Similarly, due to the symmetry of the r-degree with respect to E and H, the r-degree of confirmation of a disjunction of two incompatible hypotheses is the
weighted sum of the degrees of the disjuncts:
Th.3.2   if p(H&H') = 0 then
r(H∨H', E) = [p(H)/(p(H)+p(H'))]·r(H, E) + [p(H')/(p(H)+p(H'))]·r(H', E)
Let us now turn to conjunctions. Let E and E' be mutually independent pieces
of evidence in general and with respect to H. Then the degree of confirmation
provided by the conjunction is the product of the separate degrees:
Th.4.1   r(H, E&E') = r(H, E)·r(H, E')
Similarly, again due to the symmetry of the r-degree with respect to E and H,
for prior and posterior mutually independent hypotheses:
Th.4.2   r(H&H', E) = r(H, E)·r(H', E)
The conditional r-degree is defined analogously:
r(H, E; C) =def p(H/E&C)/p(H/C) = p(E/H&C)/p(E/C) = p(H&E/C)/(p(H/C)·p(E/C))
It is easy to check that this conditional degree has similar properties to the
unconditional one.
The quantitative theory of confirmation based on r will be called the r-theory of confirmation, and those based on d and d' will be called the d- and
the d'-theory, respectively.
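The disjunction theorems can be checked numerically in the fair-die model. The events chosen below (E = {2}, E' = {4, 5}; H = 'even', H' = 'odd') are our own illustrations:

```python
from fractions import Fraction

def p(event):
    return Fraction(len(event), 6)

def p_cond(a, b):
    return Fraction(len(a & b), len(b))

def r(h, e):                      # ratio degree: p(e/h)/p(e)
    return p_cond(e, h) / p(e)

H, H2 = {2, 4, 6}, {1, 3, 5}      # incompatible hypotheses
E, E2 = {2}, {4, 5}               # incompatible pieces of evidence

# Disjunction of incompatible evidence -> weighted sum of r-degrees
w = p(E) / (p(E) + p(E2))
assert r(H, E | E2) == w * r(H, E) + (1 - w) * r(H, E2)

# Disjunction of incompatible hypotheses -> weighted sum of r-degrees
v = p(H) / (p(H) + p(H2))
assert r(H | H2, E) == v * r(H, E) + (1 - v) * r(H2, E)
print("both weighted-sum identities hold in this model")
```

Here r(H, E∨E') = 4/3 lies between r(H, E) = 2 and r(H, E') = 1, weighted by the relative probabilities of the disjuncts, and r(H∨H', E) = 1, as a tautological hypothesis is neutrally confirmed.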
3.2. QUALITATIVE CONSEQUENCES
Introduction
In this section it will first be argued, in some more detail than in Subsections 3.1.3.
and 3.1.4., that the 'r-theory' restricted to deductive confirmation implies the
whole qualitative theory of deductive confirmation presented in Chapter 2. In
this connection it will be particularly illuminating to write out the quantitative
variant of the qualitative solution of the raven paradoxes. This example
Finally, it is easy to check that SQ.2c, dealing with fixed evidence, e.g., in the
emerald case, is trivially realized as a special case of (the first part of) Th.2c:
r("all E are Q", G; EM) = r("all E are G", G; EM) = 1/p(G/EM)
If we are, moreover, willing to express the green/grue case of the weak irrelevance assumption (WIA-emeralds) by the probabilistic assumption:
WIA-p(emeralds)
         #B      #B̄      Total
#R       a       b       a+b
#R̄       c       d       c+d
Total    a+c     b+d     a+b+c+d
(1p), (3p), (4p): [displayed r-value formulas not recoverable from the scan]
Note that all r-values, except those in (3p), exceed 1, and that this remains the
case when p(RH) = q = 0. It is easy to check that (4p) essentially amounts to
a special case of Th.1c, realizing S#.1c-ravens.
It should be noted that this analysis deviates somewhat from the more or
less standard Bayesian solutions of the paradoxes. In the light of the many
references to Horwich (1982), he may be supposed to have given the best
version. In Appendix 2 (based on Kuipers, to appear) we argue that our
solution, though highly similar, has some advantages compared to that of
Horwich.
of new tests', that is, the idea that a new test is more severe than a mere
repetition. It is a specific instance of the more general 'variety of evidence'
intuition. However, it appears not to be easy to give a rigorous explication
and proof of the general intuition (Earman 1992, pp. 77–79). But for the special
case, it is plausible to build an objective probabilistic model which realizes the
intuition under fairly general conditions. The setup is a direct adaptation of
an old proposal for the severity of tests (Kuipers 1983). Let us start by making
the intuition as precise as possible. Suppose that we can distinguish types of
(HD-)test conditions and their tokens by means other than severity considerations, e.g., ravens from different regions, and individual drawings from a region.
Any sequence of tokens can then be represented as a sequence of N and M,
where N indicates a token of a new type, i.e., a new test condition, and M a
token of the foregoing type, i.e., a mere repetition. Each test can result in a
success B or a failure non-B. Any test sequence starts, of course, with N. Suppose
further that any such sequence Xn, of length n, is probabilistic with respect to
the outcome sequence it generates. Note that RH is still supposed to imply
that all n-sequences result in Bn, and hence that one non-B in the outcome
sequence amounts to a falsification of RH. A plausible interpretation of the
intuition now is that the severity of an Xn·N-sequence is higher than that of an
Xn·M-sequence; that is, the probability that the next outcome is B is smaller
after a token of a new type than after a mere repetition.
Th.1G
r(H, E*)/r(H, E) = [p(E*/H)·p(E)] / [p(E/H)·p(E*)] = p(H/E*)/p(H/E)
Hence, if p(E*/H) ≥ p(E/H) and p(E) > p(E*), then r(H, E*) > r(H, E).
Th.1G suggests P.1G.
Th.2G
r(H*, E)/r(H, E) = p(E/H*)/p(E/H)  and  p(H*/E)/p(H/E) = [p(E/H*)/p(E/H)]·[p(H*)/p(H)]
Hence, if p(E/H*)/p(E/H) > 1, then r(H*, E) > r(H, E) and p(H*/E)/p(H/E) > p(H*)/p(H).
Th.2G suggests P.2G.
Since the special principles dealing with ravens
and emeralds concerned specific types of (conditional) deductive confirmation,
we do not need to generalize them.
In sum, the qualitative explication of general confirmation can be given by
SDC, RPP, PCS, P.1G and P.2G. As announced, it is now plausible to define
general confirmation of a non-deductive nature simply as non-deductive general
confirmation.
Although it is apparently possible to give qualitative explications of general
and non-deductive confirmation, we do not claim that these explications are
independent of the corresponding quantitative explications. In particular, we
would certainly not have arrived at P.1G and P.2G without the quantitative
detour. To be sure, this is a claim about the discovery of these principles; we
do not want to exclude that they can be justified by purely non-quantitative
considerations.
Similarly, although it is possible to suggest that the two (qualitative) proper
connotations formulated for deductive confirmation can be extrapolated to
general and non-deductive confirmation, we would only subscribe to them, at
least for the time being, to the extent that their quantitative analogues hold.
However, in this respect the quantitative situation turns out to be rather
complicated; hence, its retranslation into qualitative terms becomes even more
so. Fortunately, the proper connotations looked for do not belong to the core
of a qualitative theory of general confirmation.
Accordingly, although there is no direct, intuitively appealing, qualitative
explication of general and non-deductive confirmation beyond the principles
SDC, RPP, and PCS, there is a plausible qualitative explication of their core
features in the sense that it can be derived via the quantitative explication. In
other words, we have an indirectly, more specifically quantitatively, justified
qualitative explication of general and non-deductive confirmation.
It is important to argue that its justification is at least as strong as the
justification of the quantitative explication 'under ideal circumstances', that is,
when the probabilities make objective sense. At first sight, it may seem that we
that different definitions of the degree of confirmation will not lead to differences
in acceptance behavior, as long as the resulting posterior probabilities are
crucial for the rules of acceptance.
However, p-zero hypotheses will not get accepted in this way, since their
posterior probability remains zero. So, let us see what role the r-degree of
confirmation might play in acceptance rules for p-zero hypotheses. We have
already remarked that, although it may make perfect sense to assign non-zero probabilities to genuine hypotheses, it nevertheless occurs that scientists
would initially have assigned zero probability to certain hypotheses of which
they are nevertheless willing to say that they later came across confirming
evidence, and even that they later decided to accept them. Now
one may argue that this can be reconstructed in an 'as if' way in standard
terms: if the scientist had assigned at least such and such a (positive)
prior value, the posterior value would have passed the threshold. To calculate
this minimal prior value, both d(H, E) and r(H, E) would be suitable. However,
only r(H, E) is a degree for which this 'as if' degree would be the same as the
'original' degree, for r(H, E) does not explicitly depend on p(H).25 In contrast
to this feature, the original d-degree of confirmation assumes its neutral value 0.
If one follows this path, it is also plausible to look for a general acceptance
criterion that does justice to the, in most cases, relative arbitrariness of the
prior distribution. Let us, for that purpose, first assume that for cases of
objective probability one has decided to take as the acceptance threshold 1 − e,
with 0 < e < 1/2. One plausible criterion now seems to be the r-degree of
confirmation that is required for the transition from p(H) = e to p(H/E) ≥ 1 − e,
that is, r(H, E) = (1 − e)/e. The suggested criterion can be used for p-normal as
well as p-zero hypotheses. However, as Jeffrey (1975) rightly remarks, most
genuine scientific hypotheses not only start with a very low initial probability,
but also retain a low posterior probability. Hence, if e is very small,
they may not pass the threshold. However, passing the threshold is defined
essentially independently of p(H). For deductive confirmation, it is easily
checked to amount to the condition p(E) < e/(1 − e). Hence, for somebody for
whom p(H) = e, deductive success E should be almost as surprising as H itself.
Whether the criterion is useful in other cases has still to be studied.
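The arithmetic of the suggested criterion is straightforward; the value e = 0.05 below is our own illustrative choice:

```python
from fractions import Fraction

# Acceptance threshold 1 - e, and the r-degree needed to move a
# hypothesis from prior e to posterior at least 1 - e.
e = Fraction(1, 20)                 # e = 0.05 (illustrative)
r_required = (1 - e) / e
print(r_required)                   # 19

# For deductive confirmation r(H, E) = 1/p(E), so the criterion
# amounts to requiring p(E) to be at most e/(1 - e).
print(e / (1 - e))                  # 1/19
```

So at this threshold a deductive success must itself have probability of about 0.053 or less, i.e., E must be almost as surprising as H.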
Concluding remarks
The possibility of a quantitative, i.e., probabilistic, theory of confirmation is
one thing; its status and relevance are another. Although probabilistic reasoning
is certainly practiced by scientists, it is also clear that specific probabilities
usually do not play a role in that reasoning. Hence, in the best instrumentalist
traditions, as remarked before, the required probabilities in a quantitative
account correspond, as a rule, to nothing in reality, i.e., neither in the world
that is studied, nor in the head of the scientist. They simply provide a possibility
of deriving the qualitative features of scientific reasoning.
c(H, E) = [p(E/H) − p(E)] / [p(¬H)·p(E/H) + p(E)]
which corresponds to P.1G. Condition (v) amounts, in combination with (i) and
Recall that the d-degree of confirmation was impure as well, more specifically,
also favoring plausible hypotheses. Hence, it is no surprise that it also satisfies
the analogues of (vi), P.2I-cor and P.2IG-cor.
The foregoing comparison suffices to support the claim that the resulting
qualitative theory of Popper roughly corresponds to the impure qualitative
Bayesian theory based on d(H, E) for p-normal hypotheses. However, in contrast to d(H, E), c(H, E) is inclusive.
APPENDIX 2
COMPARISON WITH STANDARD ANALYSIS OF THE RAVEN
PARADOX
Recall that the simplification derives from the plausible assumption that
Introduction
In the previous two chapters we elaborated the ideas of qualitative and quantitative theories of confirmation along the following lines. According to the
hypothetico-deductive method a theory is tested by examining its implications.
As will be further elaborated in the next chapter, the result of an individual
test of a general hypothesis stated in observation terms, and hence of a general
test implication of a theory, can be positive, negative or neutral. If it is neutral,
the test was not well devised; if it is negative, the hypothesis, and hence the
theory, has been falsified. Qualitative confirmation theory primarily aims at
further explicating the intuitive notions of neutral and positive test results.
Some paradoxical features discovered by Hempel (1945/1965) and some queer
predicates defined by Goodman (1955) show that this is not an easy task. In
Chapter 2 we presented a qualitative theory (with a comparative component) which can deal with these problems. Quantitative, more specifically
probabilistic, theories of confirmation aim at explicating the idea of confirmation
as increasing probability, odds, or some other measure, due to new evidence.
In Chapter 3 we have seen that one of the Bayesian theories of confirmation,
the r-theory, is satisfactory in that it generates all main features of the qualitative theory.
In this chapter we will deal with a special explication program within the
Bayesian program, viz., the Carnap–Hintikka program of inductive logic, governed by the idea of learning from experience in a rational way. Carnap (1950)
introduced this perspective and pointed confirmation theory toward the search
for a suitable notion of logical or inductive probability. Generally speaking,
such probabilities combine indifference properties with inductive properties.
For this reason, probabilistic confirmation theory in this style is also called
inductive probability theory or, simply, inductive logic.
Carnap (1952) initiated the program more specifically by designing a continuum of systems for individual hypotheses. It has guided the search for optimum
systems and for systems that take analogy into account.
Carnapian systems, however, assign zero probability to universal hypotheses,
which excludes 'quantitative' confirmation of them. Jaakko Hintikka was the
first to reconsider their confirmation. His way of using Carnap's continuum
for this purpose has set the stage for a whole spectrum of inductive systems of
this type.1
The Carnap–Hintikka program of inductive logic clearly has considerable
internal dynamics; as will be indicated, its results can also be applied in several
directions. However, since Popper and Miller have questioned the very possibility of inductive probabilities, we will start in Section 4.1. by making clear
that a probability function may be inductive by leaving room for inductive
confirmation. In Section 4.2. Carnap's continuum of inductive methods, and
its generalization by Stegmüller, will be presented. In Section 4.3. Festa's
proposal for estimating the (generalized) optimum method is sketched.
Section 4.4. deals with the approaches of the present author and Skyrms, and
hints at one by Festa, related to that of Skyrms, and one by di Maio, to designing
Carnapian-like systems that take into account considerations of analogy by
similarity and proximity.
Section 4.5. presents Hintikka's first approach to universal hypotheses, a
plausible generalization, Hintikka's second approach, developed together with Niiniluoto,
and their mutual relations. The section also hints at the extension of such
systems to polyadic predicates by Tuomela and to infinitely many (monadic)
predicates, e.g., in terms of partitions, by Zabell. We will conclude by indicating
some of the main directions of application of the Carnap–Hintikka program.
The Carnap–Hintikka program may be seen as a special version of the
standard (i.e., forward) Bayesian approach to quantitative confirmation and
inductive inference. See Howson and Urbach (1989) for a handbook on the
Bayesian approach to confirmation of deterministic and statistical hypotheses,
paying some attention to the Carnap–Hintikka version and to the classical
statistical approach. There are also non-Bayesian approaches, from Popper's
degrees of corroboration (see Chapter 3) to L.J. Cohen's 'Baconian probabilities'
(Cohen 1977/1991) and the degrees of confidence of orthodox statistics. Cohen
and Hesse (1980) present several quantitative and qualitative perspectives.
4.1. INDUCTIVE CONFIRMATION
in the sense defined above, this second fact seems dramatic. Their paper
generated a lot of responses. The impossibility 'proof' of inductive probabilities
is an illustration of the fact that a prima facie plausible explication in terms of
logical consequences needs to be reconsidered in the face of clear cases, i.c.,
inductive probabilities. In Section 8.1. we will show in detail that Popper was
also unlucky with explicating the idea of verisimilitude in terms of logical
consequences and that an alternative explication is feasible. In this chapter we
will present the main lines of some paradigm examples of inductive probabilities
and the corresponding general explication of such probabilities.
The explication of what is typical of inductive as opposed to non-inductive probabilities presupposed in the Carnap-Hintikka program simply is the satisfaction of the principle of positive instantial relevance, also called the principle of instantial confirmation or the principle of inductive confirmation:

(IC)  p(R(aₙ₊₂)/eₙ&R(aₙ₊₁)) > p(R(aₙ₊₁)/eₙ)

where, phrased in terms of individuals a₁, a₂, …, R is a possible property of them and eₙ reports the relevant properties of the first n (≥ 0) individuals. Moreover, it may generally be assumed that p(R(aₙ₊₁)/eₙ) = p(R(aₙ₊₂)/eₙ), with the consequence that (IC) becomes equivalent to

p(R(aₙ₊₂)/eₙ&R(aₙ₊₁)) > p(R(aₙ₊₂)/eₙ)
generated by a prior distribution p(Hᵢ) and the likelihood functions p(·/Hᵢ). If the prior distribution is non-inductive, that is, the improper universal hypothesis is the only one that gets a non-zero probability, hence 1, the system may nevertheless be inductive due to the fact that the only relevant likelihood function is inductive. Carnap-systems amount to such systems. On the other hand, if the prior distribution assigns non-zero probabilities to all relevant genuine universal hypotheses, the phenomenon of inductive confirmation already occurs even if the likelihoods are themselves non-inductive. This second combination is more or less the standard Bayesian approach. Hence, Carnap deviates
or Hintikka), the two criteria of general confirmation coincide: p(H/E) > p(H) (the standard forward (F-)criterion) iff p(E/H) > p(E) (the backward or success (S-)criterion), in which case the ratio degree of confirmation r(H, E) may either be defined as p(H/E)/p(H) or, equivalently, p(E/H)/p(E). However, if p(H) = 0, e.g., in the case of non-inductive priors (Popper and Carnap), the two criteria deviate; only the S-criterion allows confirmation. In this case the degree of confirmation was identified with the corresponding ratio, i.e., p(E/H)/p(E), which is well-defined, assuming that p(E/H) can be interpreted.
Let us indicate the degree of confirmation of H by E according to p by rₚ(H, E), and that according to m by rₘ(H, E). In view of the ratio structure of the two degrees it is now plausible to take the ratio of the former to the latter as the degree of inductive confirmation, rᵢ(H, E) = rₚ(H, E)/rₘ(H, E).³ An attractive consequence of this definition is that the degree of inductive confirmation is simply equal to the degree of confirmation (according to p) when there is no non-inductive confirmation. E.g., assuming (IC), the (backward = forward) degree of inductive confirmation of R(aₙ₊₁) by eₙ according to p is p(R(aₙ₊₁)/eₙ)/p(R(aₙ₊₁)). This holds of course also for the corresponding conditional degrees of confirmation: rᵢ(R(aₙ₊₂), R(aₙ₊₁); eₙ) = rₚ(R(aₙ₊₂), R(aₙ₊₁); eₙ) = p(R(aₙ₊₂)/eₙ&R(aₙ₊₁))/p(R(aₙ₊₂)/eₙ).
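These ratio degrees can be illustrated numerically. The sketch below is only a hypothetical illustration, assuming for p a Carnapian inductive function of the form (nR + λ/4)/(n + λ) (presented in the next section), the non-inductive m that always returns 1/4, and arbitrary illustrative evidence:

```python
def p_next(n_R, n, lam=2.0, k=4):
    """Inductive special value p(R/e_n) of a Carnapian system:
    a weighted mean of the observed frequency n_R/n and 1/k."""
    return (n_R + lam / k) / (n + lam)

# Conditional ratio degrees of confirmation of R(a_{n+2}) by R(a_{n+1}),
# given e_n reporting 7 REDs among 10 trials (illustrative numbers).
n, n_R = 10, 7
r_p = p_next(n_R + 1, n + 1) / p_next(n_R, n)  # inductive p
r_m = 0.25 / 0.25                              # non-inductive m: always 1/4, so r_m = 1
r_i = r_p / r_m                                # degree of inductive confirmation
```

Since m registers no confirmation at all (rₘ = 1), the degree of inductive confirmation here simply coincides with the degree of confirmation according to p, which exceeds 1.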
The proposed (conditional) degree of inductive confirmation seems rich enough to express the inductive influences in cases of confirmation when p(H) and m(H) are both non-zero or both 0. In the first case, the F- and S-criterion coincide for both p and m, and the corresponding (conditional) ratio measures are uniquely determined. In the second case, only the success criterion can apply, and the corresponding ratio measures can do their job. However, the situation is different if p(H) is non-zero and m(H) = 0,⁴ which typically applies in case of an inductive prior, e.g., standard Bayesian or Hintikka-systems. In Section 4.5. we will explicate how forward confirmation then deviates from success confirmation.
In the rest of this chapter we will only indicate in a few cases the resulting degree of confirmation simpliciter and the degree of inductive confirmation, in particular when they lead to prima facie surprising results.
precisely four colored segments, BLUE, GREEN, RED, and YELLOW, without further information about the relative size of the segments. So you do not know the objective probabilities. What you subsequently learn are only the outcomes of successive trials. Given the sequence of outcomes eₙ of the first n trials, your task is to assign reasonable probabilities, p(R/eₙ), to the hypothesis that the next trial will result in, for example, RED.⁷
There are several ways of introducing the λ-continuum, but the basic idea behind it is that it reflects gradually learning from experience. According to Carnap's favorite approach p(R/eₙ) should depend only on n and the number of occurrences of RED thus far, nR. Hence, it is indifferent to the color distribution among the other n − nR trials. This is called the principle of restricted relevance. More specifically, Carnap wanted p(R/eₙ) to be a special weighted mean of the observed relative frequency nR/n and the (reasonable) initial probability 1/4. This turns out to leave room for a continuum of (C-)systems, the λ-continuum, 0 < λ < ∞, for four outcomes:

p(R/eₙ) = (nR + λ/4)/(n + λ) = (n/(n + λ))·(nR/n) + (λ/(n + λ))·(1/4)

Note that the weights n/(n + λ) and λ/(n + λ) add up to 1 such that for increasing n the former gradually increases from 0 to 1 and hence the latter decreases from 1 to 0. Moreover, the larger the value of λ, the slower the first increases, at the expense of the second, i.e., the slower one is willing to learn from experience.
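As a sketch, the special values of the λ-continuum and their dependence on λ can be computed directly; the function name and the evidence figures below are illustrative:

```python
def carnap(n_R, n, lam, k=4):
    """Special value p(R/e_n) of Carnap's lambda-continuum:
    (n_R + lam/k)/(n + lam), a weighted mean of the observed
    relative frequency n_R/n and the initial probability 1/k."""
    return (n_R + lam / k) / (n + lam)

# Before any evidence (n = 0) every system returns the initial value 1/k.
initial = carnap(0, 0, lam=2)

# After 60 REDs in 100 trials, a small lambda stays close to the observed
# frequency 0.6, while a large lambda is still pulled toward 1/4:
# fast learning versus slow learning from experience.
fast, slow = carnap(60, 100, lam=1), carnap(60, 100, lam=100)
```

The two weights always sum to 1, so every special value lies between the observed relative frequency and the initial probability.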
C-systems have several attractive indifference, confirmation and convergence properties (for a systematic treatment see (Kuipers 1978)). The most important are these.
concerns the hypothesis that, given eₙ, the next result will also be RED or BLUE, i.e., will be in agreement with the universal hypothesis that all future results are RED or BLUE, and the second probability concerns the same hypothesis for the next trial, but now not only given eₙ, but also that it was followed by RED or BLUE, i.e., a result in agreement with the universal hypothesis,
- universal-instance convergence, i.e., p(R∨B/eₙ) approaches 1 for increasing n if only RED and BLUE keep occurring according to eₙ.
Though C-systems have the properties of universal-instance confirmation and convergence, forward confirmation of, and convergence to, universal hypotheses are excluded. The reason is that C-systems in fact assign zero prior probability to all universal generalizations, for instance to the hypothesis that all results will be RED. Of course, this is desirable in the described situation, but if you were only told that there are at most four colored segments, you would like to leave room for this possibility. Success confirmation of universal hypotheses is, of course, not excluded. For simplicity, we will deal with it in Section 4.5.
Stegmüller (1973) generalized C-systems to (GC-)systems, in which the uniform initial probability 1/4 is replaced by arbitrary non-zero values, e.g., p(R), leading to the special values:

p(R/eₙ) = (nR + λ·p(R))/(n + λ)
Carnap (1952) proved that for certain kinds of objective probability processes, such as a wheel of fortune, there is an optimal value of λ, depending on the objective probabilities, in the sense that the average mistake may be expected to be lowest for this value. Surprisingly, this optimal value is independent of n. In fact he proved the existence of such an optimal C-system for any multinomial process, i.e., sequences of independent experiments with constant objective probabilities qᵢ for a fixed finite number of k outcomes. The relevant optimal
which is easily checked to reduce to (C) when all fᵢ equal 1/k.
Then he formulates for such a class two solutions of the estimation problem, which turn out to be essentially equivalent. Moreover, one of these solutions relates the research area of inductive logic to that of truth approximation: the optimum solution may be expected to be the most efficient way of approaching the objective or true probabilities. As a matter of fact, the present probabilistic context provides an example of truth approximation where a quantitative approach is not only possible, but even plausible (see the end of Subsection 12.3.2. for the precise link).
Unfortunately, wheels of fortune do not constitute a technological (let alone
a biological) kind, for which you can use information about previously investigated instances. But if you had knowledge of a random sample of all existing
wheels of fortune, Festa's approach would work on the average for a new,
randomly drawn, one.
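Carnap's optimality claim can be illustrated by simulation. The following sketch is a Monte Carlo illustration under assumed parameter values, not Festa's analytic derivation: it estimates the mean squared error of the λ-continuum's predictions for a multinomial process with given objective probabilities.

```python
import random

def mean_sq_error(q0, lam, k=4, trials=100, runs=500, seed=1):
    """Average squared difference between the lambda-continuum
    prediction for outcome 0 and its objective probability q0,
    over simulated multinomial sequences."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        n0 = 0
        for n in range(trials):
            pred = (n0 + lam / k) / (n + lam)  # prediction before the next trial
            total += (pred - q0) ** 2
            if rng.random() < q0:              # outcome 0 occurs with probability q0
                n0 += 1
    return total / (runs * trials)

# For a strongly biased wheel, a small lambda (fast learning) beats a large
# one; for a (nearly) fair wheel the ordering reverses.
biased = mean_sq_error(0.7, 0.5) < mean_sq_error(0.7, 100)
fair = mean_sq_error(0.25, 100) < mean_sq_error(0.25, 0.5)
```

This matches the qualitative picture: which λ minimizes the average mistake depends on the objective probabilities, which is why an optimum λ has to be estimated when those probabilities are unknown.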
Carnap was well aware that C-systems do not leave room for non-analytic universal generalizations. In fact, apart from one type of system in the previous section, all systems presented thus far have this problem. Although tolerant of other views, Carnap himself was inclined to downplay, for this and other reasons, the theoretical importance of universal statements. However, as already mentioned, if you are only told that the wheel of fortune has at most four colored segments, you might want to leave room for statements claiming, for example, that all results will be RED.
Hintikka (1966) took up the problem of universal statements. His basic idea for such a system, here called H-system, is a Bayesian conditionalized probability system of the following double-inductive kind: (1) a prior distribution of non-zero probabilities for (the outcome spaces of) mixed (existential and universal) statements, called constituents, stating that all and only the colors so-and-so will occur in the long run, (2) for each of these constituents an appropriate C-system as conditional system, (3) the constituents in (1) and (2) dealing with sets of colors of the same size (w) get the same prior values, p(Hw), and the corresponding conditional C-systems not only have the same initial values (1/w) for all relevant outcomes, but also the same λ-values, λw, (4) Bayes's theorem for conditionalizing posterior probabilities is applied. This
leads, amongst others, to tractable formulas for the special values, such as p(R/eₙ), and for the posterior values p(Hw/eₙ).
Hintikka, more specifically, proposed a two-dimensional system, the so-called α-λ-continuum, by taking a uniform value of λ for all conditional systems and the prior distribution proportional to the probability that α (not to be confused with the a in the previous section) virtual individuals or trials are, according to the C-system for all possible colors, compatible with the relevant constituent. Here α is a finite number to be chosen by the researcher. The larger it is chosen, the lower the probability of all constituents based on fewer outcomes than possible.
However, as shown in (Kuipers 1978), H-systems in general already have, besides the instantial, and universal-instance, confirmation and convergence properties, the desired property of (forward) universal confirmation, i.e., the probability of a not yet falsified universal statement increases with another instance of it. For example, assuming that only RED occurred according to eₙ, p(HR/eₙ) is smaller than p(HR/eₙ&R), where HR indicates the constituent that only RED will occur in the long run. H-systems have, moreover, universal convergence, i.e., the probability of the strongest not yet falsified universal statement converges to 1. For example, p(HR/eₙ) approaches 1 for increasing n as long as only RED continues to occur. For the special case of H-systems belonging to Hintikka's α-λ-continuum it holds that, for increasing parameter α, universal confirmation is smaller and universal convergence slower.
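The double-inductive mechanism behind universal confirmation and convergence can be sketched in a toy H-system with two colors. The constituents, priors, and λ-value below are illustrative choices, not Hintikka's α-λ values:

```python
def posterior_only_red(n, lam=2.0, priors=(0.25, 0.25, 0.5)):
    """Posterior probability of the constituent H_R ('only RED will
    occur') after n RED outcomes, for constituents H_R, H_B, H_RB
    with conditional C-systems as likelihoods (Bayes's theorem)."""
    p_R, p_B, p_RB = priors
    like_RB, n_red = 1.0, 0
    for i in range(n):
        # conditional C-system for H_RB (w = 2): p(R/e_i) = (n_red + lam/2)/(i + lam)
        like_RB *= (n_red + lam / 2) / (i + lam)
        n_red += 1
    # H_R makes each RED certain (likelihood 1); H_B is falsified (likelihood 0).
    return p_R / (p_R + p_RB * like_RB)

# Universal confirmation: each further RED raises p(H_R/e_n) ...
increasing = posterior_only_red(1) < posterior_only_red(2) < posterior_only_red(3)
# ... and universal convergence: p(H_R/e_n) approaches 1.
limit_value = posterior_only_red(200)
```

The key point the sketch exhibits is that the mixed constituent's likelihood for an unbroken run of REDs shrinks, so the posterior mass shifts to the not yet falsified universal constituent.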
The conditional degree of (purely) inductive confirmation corresponding to instantial confirmation is straightforwardly and uniquely defined for H-systems. However, the conditional degree of (inductive) confirmation corresponding to universal confirmation has two sides. Let us first consider the success side (story). Since p(R/HR&eₙ) = 1, assuming that eₙ reports n times R, the conditional degree of (backward and deductive) confirmation of HR by R, given eₙ, according to p, denoted by rₚ(HR, R; eₙ), is p(R/HR&eₙ)/p(R/eₙ) = 1/p(R/eₙ). The corresponding degree for the m-function is 1/m(R/eₙ) = 4. Since nR = n, p(R/eₙ) will be, due to 'recurring' instantial confirmation, larger than 1/4, so that rᵢ(HR, R; eₙ) = m(R/eₙ)/p(R/eₙ) = (1/4)/p(R/eₙ) becomes smaller than 1. Although both degrees report confirmation, and both cases even concern in fact deductive confirmation, it may be surprising at first sight that the 'p-degree' is smaller than the 'm-degree', but on second thoughts it is not. An inductive probability function invests, as it were, its inductive attitude towards uniform evidence a priori, with the consequence that the explanatory success of a universal hypothesis is downgraded when it is repeatedly exemplified. On the other hand, the non-inductive function m acknowledges each new success of a universal hypothesis as equally successful as the previous ones. In other words, since both degrees of confirmation may also be seen as degrees of success, it is plausible that this degree is larger according to the m-function than according to an inductive p-function, which by definition anticipates relatively uniform evidence.¹⁰ Note that the success confirmation of HR by R, given n times R, occurs of course
Concluding remarks
In contrast to Lakatos's claim (Lakatos 1968), the Carnap-Hintikka program in inductive logic combines a strong internal dynamics with applications in several directions. The strong internal dynamics will be clear from the foregoing. Let us finally indicate some of its intended and unintended applications. Systems of inductive probability were primarily intended for explicating confirmation in terms of increasing probability. As for probabilistic Bayesian confirmation theory in general, one may define a quantitative degree of confirmation in various ways (see Chapter 3), but we argued for the ratio measure. However this may be, we still have to address the question whether we need, and are justified in using, inductive probabilities for confirmation purposes. For it is one thing to claim that they can be used to describe or explicate a kind of supposedly rational behavior towards hypotheses in relation to evidence; this does not answer the question whether we are justified in using them and whether we really need them. To begin with the latter, it is clear that the success theory of confirmation (based on the S-criterion and the ratio degree) applied to the non-inductive probability function m is essentially sufficient to evaluate the confirmation of H by E. This approach may be called Popperian, although Popper's own proposal for the degree of corroboration (!) is somewhat different, and, as we have seen in Appendix 1 of Chapter 3, surprisingly enough, such that it entails the Matthew-effect. The question whether we are justified in using inductive probabilities brings us back to the Principle of the Uniformity of Nature (PUN). If PUN is true in some sophisticated sense, e.g., in the sense that there are laws of nature of one kind or another, the use of inductive probabilities makes sense. However, justifying PUN is another story. Hume was obviously right in claiming that PUN cannot be justified in a non-circular way. Nevertheless, all living beings happen to have made, in the process of
biological evolution, the inductive jump to PUN for everyday purposes with success. Hence, so far PUN has been rewarding indeed. We may also conclude from this fact that something like a high probability rule of acceptance is defensible. A remaining question then is how it is possible that Popper, who denounces PUN but believes in laws of nature (Popper 1972, p. 99), apparently is able to circumvent the use of inductive probabilities. The point is, of course, that every time a Popperian scientist makes the, relatively great, jump from a mere hypothesis to a hypothesis which is, for the time being, accepted as true, the leap is just not prepared by inductive 'mini-leaps'. That Popper refused to call the big jumps he needed inductive is ironical.
Let us finally turn to applications of inductive probabilities other than confirmation. Carnap (1971a) and Stegmüller (1973) stressed that they can be used in decision making, and Skyrms (1991b) applies them in game theory by letting players update their beliefs with C-systems. Costantini et al. (1982) use them in a rational reconstruction of elementary particle statistics. Festa (1993) suggests several areas of empirical science where optimum inductive systems can be used. Welch (1997) indicates how systems with virtual analogy can be used for conceptual analysis in general and for solving conflicts about ethical concepts in particular. Finally, for universal hypotheses, Hintikka (1966), Hilpinen (1968)¹³ (see also Hintikka and Hilpinen 1966) and Pietarinen (1972)¹⁴ use systems of inductive probability to formulate rules of acceptance, i.e., rules for inductive jumps, the task of inductive logic in the classical sense. As we noted already in Section 3.3., their main proposal for the α-λ-continuum successfully avoids the lottery-paradox, and could be extended to another class of systems of inductive probability, in fact to all SH/NH-systems (Kuipers 1978, Section 8.6.). However, they remain restricted to monadic predicate languages. As will be indicated in Chapter 12, Niiniluoto (1987a) uses such systems to estimate degrees of verisimilitude, hence giving rise to a second fusion of inductive logic and the truth approximation program.¹⁵
In closing this chapter we also close the part about confirmation. Recall what we stated already at the end of Chapter 2. Although the role of confirmation (and falsification) will be strongly relativized, there is a strong need for a sophisticated account of confirmation. The reason is that the notions of confirmation and falsification remain of crucial importance for testing at least three types of hypotheses, viz., general test implications, comparative success hypotheses, and truth approximation hypotheses. In the next part we turn to the first two types of hypotheses and the role of confirmation and falsification in their separate and comparative evaluation.
PART II
EMPIRICAL PROGRESS
INTRODUCTION TO PART II
Confirmation of a hypothesis has the connotation that the hypothesis has not yet been falsified. Whatever the truth claim associated with a hypothesis, as soon as it has been falsified, the plausibility (or probability) that it is true in the epistemologically preferred sense is and remains nihil. Hence, from the forward perspective, the plausibility of the hypothesis cannot increase. Similarly, from the success perspective, the plausibility of evidence which includes falsifying evidence, conditional on the hypothesis, is and remains nihil, and hence cannot increase. In this part we will elaborate how the evaluation of theories can nevertheless proceed after falsification.
In Chapter 5 the attention is directed at the more sophisticated qualitative HD-evaluation of the merits of theories, in terms of successes and counterexamples, obtained by testing test implications of theories. The resulting evaluation report leads to three interesting models of separate HD-evaluation of theories. Special attention is paid to the many factors that complicate HD-evaluation and, roughly for the same reasons, HD-testing.
In Chapter 6 it is pointed out that the evaluation report resulting from separate evaluation naturally leads to the comparative evaluation of theories, with the crucial notion of 'more successfulness', which in its turn suggests 'the rule of success', which marks empirical progress. It will be argued that this 'instrumentalist' or 'evaluation methodology', by denying a dramatic role for falsification, and even leaving room for some dogmatism, is methodologically superior to the 'falsificationist methodology', which assigns a theory-eliminative role to falsification. Moreover, the former methodology will be argued to have better perspectives for being functional for truth approximation than the latter. Finally, the analysis sheds new light on the distinction between science and pseudoscience.
5
SEPARATE EVALUATION OF THEORIES BY THE
HD-METHOD
Introduction
In Chapter 2 we gave an exposition of HD-testing, the HD-method of testing hypotheses, with emphasis on the corresponding explication of confirmation. HD-testing attempts to give an answer to one of the questions one may be interested in, the truth question, which may be qualified according to the relevant epistemological position.¹ However, the (theory) realist, for instance, is not only interested in the truth question, but also in some other questions. To begin with, there is the more refined question of which (individual or general) facts² the hypothesis explains (its explanatory successes) and with which facts it is in conflict (its failures), the success question for short. We will show in this chapter that the HD-method can also be used in such a way that it is functional for (partially) answering this question. This method is called HD-evaluation, and uses HD-testing. Since the realist ultimately aims to approach the strongest true hypothesis, if any, i.e., the (theoretical-cum-observational) truth about the subject matter, the plausible third aim of the HD-method is to help answer the question of how far a hypothesis is from the truth, the truth approximation question. Here the truth will be taken in a relatively modest sense, viz., relative to a given domain and conceptual frame. In the next chapter we will make plausible, and in Part III prove, that HD-evaluation is also functional for answering the truth approximation question.
As we will indicate in a moment, the other epistemological positions are guided by two related, but more modest, success and truth approximation questions, and we will show later that the HD-method is also functional for answering these related questions. But first we will articulate the realist viewpoint in some more detail. For the realist, a hypothesis is a statement that may be true or false, and it may explain a number of facts. A theory will here be conceived as a hypothesis of a general nature claiming that it is the strongest true hypothesis, i.e., the truth, about the chosen domain (subject matter) within the chosen conceptual frame (generated by a vocabulary). This claim implies, of course, that a theory claims to explain all relevant facts. Hence, a theory may not only be true or false, it may also explain more or fewer facts, and it may even be more or less near the truth. To be sure, presenting the realist notion of a theory as the indicated special kind of hypothesis is to some extent
a matter of choice, which will turn out to be very useful. The same holds for
adapted versions of the other epistemological positions.
Let us briefly look at the relevant questions from the other main epistemological viewpoints, repeating the relevant version of the truth question. The constructive empiricist is interested in the question of whether the theory is empirically adequate or observationally true, i.e., whether the observational theory implied by the full theory is true, in the refined success question of what its true observational consequences and its observational failures are, and in the question of how far the implied observational theory is from the strongest true observational hypothesis, the observational truth. The referential realist is, in addition, interested in the truth of the reference claims of the theory and in how far it is from the strongest true reference claim, the referential truth. The instrumentalist phrases the first question of the empiricist more liberally: for what (sub)domain is it observationally true? He retains the success question of the empiricist. Finally, he will reformulate the third question: to what extent is it the best (and hence the most widely applicable) derivation instrument?
The method of HD-evaluation will turn out, in this and the following chapter, to be a direct way to answer the success question and, in later chapters, an indirect way to answer the truth approximation question, in both cases for all four epistemological positions. The relevant chapters will primarily be presented in relatively neutral terminology, with specific remarks relating to the various positions. In this chapter, the success question will be presented in terms of successes and counterexamples³: what are the potential successes and counterexamples of the theory?
In sum, two related ways of applying the HD-method to theories can be distinguished. The first one is HD-testing, which aims to answer the truth question. However, as soon as the theory is falsified, the realist of a falsificationist nature, i.e., one exclusively advocating the method of HD-testing, sees this as a disqualification of any prima facie explanatory success. The reason is that genuine explanation is supposed to presuppose the truth of the theory. Hence, from the realist-falsificationist point of view a falsified theory has to be given up and one has to look for a new one.
However, the second method to be distinguished, HD-evaluation, continues to take falsified theories seriously. It aims at answering the success question, the evaluation of a theory in terms of its successes and counterexamples (problems) (Laudan 1977). For the (non-falsificationist) realist, successes are explanatory successes and, when evaluating a theory, they are counted as such, even if the theory is known to be false. It is important to note that the term '(HD-)evaluation' refers to the evaluation in terms of successes and counterexamples, and not in terms of truth approximation, despite the fact that the method of HD-evaluation will nevertheless turn out to be functional for truth approximation. Hence, the method of HD-evaluation can be used meaningfully without any explicit interest in truth approximation and without even any substantial commitment to a particular epistemological position stronger than instrumentalism.⁴
We have already seen, in Chapter 2, that the HD-method can be used for testing, but what is the relation of the HD-method to evaluation? Recall that, roughly speaking, the HD-method prescribes to derive test implications and to test them. In Section 5.1. it is shown that a decomposition of the HD-method applied to theories naturally leads to an explication of the method of separate HD-evaluation, using HD-testing, even in terms of three models. Among other things, it will turn out that HD-evaluation is effective and efficient in answering the success question.
In Section 5.2. so-called falsifying general facts will first be analyzed. Then the decomposition of the HD-method will be adapted for statistical test implications. Finally, it is shown that the decomposition suggests a systematic presentation of the different factors that complicate the straightforward application of the HD-methods of testing and evaluation.
In the next chapter we will use the separate HD-evaluation of theories for the comparative HD-evaluation of them. Strictly speaking, only Section 5.1. of this chapter is required for that purpose.
5.1. HD-EVALUATION OF A THEORY
Introduction
The core of the HD-method for the evaluation of theories amounts to deriving from the theory in question, say X, General Test Implications (GTI's) and subsequently (HD-)testing them. For every GTI I it holds that testing leads sooner or later either to a counterexample of I, and hence a counterexample of X, or to the (revocable) acceptance of I: a success of X. A counterexample implies, of course, the falsification of I and X. A success minimally means a 'derivational success'; it depends on the circumstances whether it is a predictive success and it depends on one's epistemological beliefs whether or not one speaks of an explanatory success.
However this may be, from the point of view of evaluation falsification is, although an interesting fact, no reason to stop the evaluation of the theory. One will derive and test new test implications. The result of such a systematic application of the HD-method is a (time-relative) evaluation report of X, consisting of registered counterexamples and successes.
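The notion of an evaluation report can be pictured as simple bookkeeping. The following sketch (names hypothetical, not the book's formal apparatus) just records the two kinds of results of testing GTI's:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationReport:
    """Time-relative evaluation report of a theory X: registered
    counterexamples and (revocably) accepted test implications."""
    counterexamples: list = field(default_factory=list)
    successes: list = field(default_factory=list)

    def register(self, gti, counterexample=None):
        # A counterexample of a GTI counts against X but does not stop
        # the evaluation; acceptance of the GTI is recorded as a success.
        if counterexample is None:
            self.successes.append(gti)
        else:
            self.counterexamples.append((gti, counterexample))

report = EvaluationReport()
report.register("I1")                       # testing led to acceptance of I1
report.register("I2", counterexample="x0")  # testing of I2 produced a counterexample
```

The point of the sketch is that falsification merely adds an entry to one side of the report; evaluation continues with further test implications.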
Now, it turns out to be very clarifying to write out in detail what is implicitly well-known from the work of Hempel and Popper, viz., that the HD-method applied to theories is essentially a stratified, two-step method, based on a macro- and a micro-argument, with much room for complications. In the already indicated macro-step, one derives GTI's from the theory. In their turn, such GTI's are tested by deriving from them, in the micro-step, with the help of suitable initial conditions, testable individual statements, called Individual Test Implications (ITI's).
In this section we will deal with the macro-argument, the micro-argument, and their combination into three models of (separate) evaluation. In the second section special attention will be paid to so-called falsifying general hypotheses, to statistical test implications, and to complications of testing and evaluation.
that is, for all x in the domain D, satisfying the initial conditions C(x), the fact F(x) is 'predicted'.

GTI: I: for all x in D [if C(x) then F(x)]

Scheme 5.1. The macro HD-argument
Let us now concentrate on the results of testing GTI's. When testing a GTI of a theory, we are interested in its truth-value, hence we use the test terminology. Successive testing of a particular GTI I will lead to one of two mutually exclusive results. The one possibility is that sooner or later we get falsification of I by coming across a falsifying instance or counterexample of I, i.e., some x₀ in D such that C(x₀) and not-F(x₀), where the latter conjunction may be called a falsifying combined (individual) fact.
Assuming that LMC is correct, a counterexample of I is, strictly speaking, also a counterexample of X, falsifying X, for not only can not-I be derived from the falsifying combined fact, but also not-X by Modus Tollens. Hence, from the point of view of testing, it is plausible to speak also of falsification of the theory. However, it will frequently be useful in this chapter, and perhaps more in line with the evaluation terminology, to call the counterexample less dramatically a negative instance, and further to speak of a negative (combined) individual fact, or simply an individual problem of X.
The alternative possibility is that, despite variations in members of D and
ways in which C can be satisfied, all our attempts to falsify I fail, i.e., lead to
the predicted results. The conclusion attached to repeated success of I is of
course that I is established as true, i.e., as a general (reproducible) fact.
Now one usually calls the acceptance of I as true at the same time a
confirmation or corroboration of X, and the realist will want to add that I has
Let us have a closer look at the testing of a general test implication, the
micro-step of the HD-method. To study the testing of GTI's in detail, it is plausible
to widen the perspective to the evaluative point of view on GTI's and to neglect
their derivability from X.
Let us call a statement satisfying all the conditions for being a GTI, except its
derivability from some given theory, a General Testable Conditional (GTC).
Let G be such a statement, which of course remains of the same form as a
GTI:
G: for all x in D [if C(x) then F(x)]
To evaluate G we derive from G, for some x0 in D and suitable initial conditions
(IC), viz., C(x0), the predicted individual fact or individual prediction F(x0),
i.e., an Individual Test Implication (ITI). It is an individual prediction in the
sense that it concerns a specific statement about an individual item in the
domain, as do the relevant initial conditions. It is a prediction in the sense that
the outcome is assumed not to be known beforehand. Hence, talking about a
prediction does not imply that the fact itself should occur later than the
prediction, only that the establishment of the fact has to occur later (leaving
room for so-called retrodictions).
What is predicted, i.e., F(x0), is usually called an (individual) effect or event.
Both are, in general, misleading. It may concern a retrodiction (even relative
to the initial conditions), and hence it may be a cause. And it may concern a
state of affairs instead of an event. For these reasons we have chosen the neutral
term (predicted) individual 'fact'.
In Scheme 5.2. the micro-reasoning of the HD-method, the micro
HD-argument, is represented, where UI indicates Universal Instantiation.

G: for all x in D [if C(x) then F(x)]
x0 in D
____________________ UI
if C(x0) then F(x0)
IC: C(x0)
____________________ MP
ITI: F(x0)

Scheme 5.2. The micro HD-argument
The specific prediction posed by the individual test implication can come true
or prove to be false. If the specific prediction turns out to be false, then,
assuming that the initial conditions were indeed satisfied, the hypothesis G has
been falsified. The combined individual fact "C(x0) & not-F(x0)", "C0 & not-F0"
for short, may be called a falsifying individual fact and x0 a falsifying
instance or counterexample of G. It will again be useful for evaluative purposes
to speak of a negative instance, and a negative (combined) individual fact or
simply an individual problem of G.
If the specific prediction posed by the individual test implication turns out
to be true, we get the combined individual fact "C0 & F0", which is not only
compatible with (the truth of) G, but is, not in full but partially, derivable from
G in the following sense (implying partial entailment by G, in the sense of
Section 3.1.): one of its conjuncts can be derived from G, given the other.
Again, one may be inclined to talk about confirmation⁷ or even about explanation.
However, given that we do not want to exclude that G has already been
falsified, we prefer again the neutral (evaluation) terminology: x0 is called a
positive instance and the combined individual fact "C0 & F0" a positive
(combined) individual fact or simply an individual success of G.
It is easy to check that the same story can be told about "not-C0 & not-F0"
for some x0 in D, by replacing the role of "C0" by that of "not-F0", the new
initial condition, and the role of "F0" by that of "not-C0", the new individual
test implication. The crucial point is that "if C0 then F0" is logically equivalent
to "if not-F0 then not-C0". Consequently, x0 is a positive instance satisfying
"not-C0 & not-F0", being a positive individual fact or an individual success of G.
The remaining question concerns how to evaluate the fourth and last combined
individual fact, "not-C0 & F0", concerning some x0 in D. Of course, this
fact is compatible with G, but neither of its components is derivable from G and
the other component. Hence, the fourth combined fact cannot be partially
derived from G. Or, to put it differently, neither of its components, taken as
initial condition, can lead to a negative instance, whereas this is the case for
(precisely) one of the components in the two cases of partially derivable facts.
Hence the terms neutral instance and neutral (combined) individual fact or
neutral result are the proper qualifications.
Consequently, the evaluation report of GTCs has, like the evaluation reports
of theories, two sides: one for problems and the other for successes. Again, they
form partial answers to the success question now raised by the GTC. However,
here the two sides list entities of the same kind: negative or positive instances
or individual facts, that is, individual problems and individual successes,
respectively.
It is again clear that the micro HD-argument for a GTC G is effective and
efficient for making its evaluation report: each test of G either leads to a positive
instance, and hence to an increase of G's individual successes, or it leads to a
negative instance, and hence to an increase of G's individual problems. It does
not result in neutral instances. Note that it is crucial for this analysis that
GTCs have, by definition, a conditional character.
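The classification of the four combined individual facts and the two-sided evaluation report can be summarized in a small sketch; the function names and the tuple encoding of instances are illustrative assumptions, not notation from the text.

```python
def classify_instance(c: bool, f: bool) -> str:
    """Classify a combined individual fact for a GTC
    G: for all x in D [if C(x) then F(x)],
    given whether the item satisfies C and F."""
    if c and not f:
        return "negative"   # C0 & not-F0: counterexample, individual problem
    if c and f:
        return "positive"   # C0 & F0: partially derivable, individual success
    if not c and not f:
        return "positive"   # not-C0 & not-F0: contraposition case, also a success
    return "neutral"        # not-C0 & F0: compatible, but not partially derivable

def evaluation_report(instances):
    """Two-sided evaluation report: individual problems and successes.
    `instances` is a list of (item, C-status, F-status) triples."""
    report = {"problems": [], "successes": []}
    for x, c, f in instances:
        status = classify_instance(c, f)
        if status == "negative":
            report["problems"].append(x)
        elif status == "positive":
            report["successes"].append(x)
        # neutral instances are listed on neither side
    return report
```

Note that when a test actually realizes the initial condition C, the outcome is never neutral, in line with the effectiveness of the micro HD-argument.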
What we have described above is the micro HDargument for evaluating a
GTC. When we restrict attention to establishing its truthvalue, and hence stop
with the first counterexample, it is the (micro) HDargument for testing the
GTC.
that the models of HD-testing produce the test reports in an effective and
efficient way for the same reasons as the models of HD-evaluation do:
HD-testing leads to successes or problems, and not to neutral results. As
already suggested, the exclusive interest in HD-testing of theories will be called
the (naive) falsificationist perspective or method.
Table 5.1. Methodological categories of theory evaluation and testing

              Problems                     Successes

Individual    individual problem          individual success
              negative instance           positive instance
              negative individual fact    positive individual fact
              counterexample

General       general problem             general success
              negative general fact       positive general fact
              falsifying general fact
Table 5.1. summarizes the four methodologically relevant categories and their
terminological variants. It is easy to read off the four possible models of
HD-evaluation and HD-testing.
5.2. FALSIFYING GENERAL HYPOTHESES, STATISTICAL TEST
IMPLICATIONS, AND COMPLICATING FACTORS
Introduction
In this section we will first deal with so-called falsifying general hypotheses,
that is, general problems summarizing individual problems, i.e., counterexamples.
Then we will show that the main lines of the analysis of testing and
evaluation also apply when the test implications are of a statistical nature.
Finally, we will deal with all kinds of complications of testing and evaluation,
which give occasion to dogmatic strategies and suggest a refined scheme of
HD-argumentation.
5.2.1. Falsifying general hypotheses and general problems
In this subsection we pay attention to a methodological issue that plays an
important role in the methodology of Popper and others: socalled falsifying
general hypotheses. In contrast to their dramatic role in HDtesting of theories,
in HDevaluation they play the more modest role of a general problem which
summarizes individual problems on the minus side of the evaluation report of
a theory.
Let us return to the evaluation or testing of a general testable conditional
(GTC) G. Finding a partially implied instance of G does, of course, not imply
that G is true. Repetitions of the individual tests, varying the different ways in
which the initial conditions can be realized, are necessary to make it plausible
can be established, with C* implying C and F* implying not-F, such that each
partially derivable individual fact of G* of type "C* & F*" is a falsifying instance
of G. In that case, G* may be called a falsifying or negative general fact for G.
When G is a GTI of X, G* may also be called a lower level falsifying (general)
hypothesis, to use Popper's phrase, now contradicting not only G but also X.
However, more in line with the terminology of evaluation, we call it a negative
general fact or general problem of X.
Example: An example is the law of combining volumes (Gay-Lussac).
Apart from Dalton himself, it was generally considered a problem
for Dalton's version of the atomic theory. It can easily be reconstructed
as a general problem in the technical sense defined above. As is well
known, Avogadro turned the tables on the atomic theory by a
fundamental change in order to cope with this problem.
with C' implying C and F implying F', then G implies G' and every negative
instance of G' is a negative instance of G. When G' becomes established, one
may call it a general fact derivable from G, and hence a general success of X
if G is a GTI of X.
As soon as, and as long as, all negative individual facts of a theory can be
summarized in negative general facts, the individual problems in the evaluation
report can be replaced by the corresponding general problems. In this way we
get on both sides of the record the same kind of conceptual entities, viz., general
facts, forming the ingredients of the macro-model of HD-evaluation of theories.
Hence, the foregoing exposition concerning general facts amounts to an
additional illustration of the fact that general hypotheses and the like can play an
important intermediate role in the evaluation of theories. Skipping the
intermediate notions of general successes and general problems would hide the fact
that they make the evaluation of theories very efficient, in theory and practice.
Instead of confronting theories with all previously or subsequently established
combined individual facts, it is possible to restrict the confrontation as much
as possible to a confrontation with old or new summarizing general facts.
5.2.2. Statistical test implications
The presentation thus far may have suggested that the analysis only applies to
hypotheses and their test implications as far as they are of a deterministic and
non-comparative nature. In this subsection we will present the adapted main
lines for statistical test implications, first of a non-comparative and then of a
comparative nature. In the literature concerning statistical testing, one can find
all kinds of variants and details.⁸ Here it is only necessary to show that
statistical general and individual test implications can essentially be tested in
a way similar to non-statistical ones.
In the non-comparative form, a typical case is that the theory, e.g., Mendel's
theory, entails a probabilistic GTI of the following abstract form: in domain D
the probability of feature F on the condition C satisfies a certain probability
distribution p (e.g., binomial or normal). The sample version, the proper
Statistical GTI, and the corresponding IC and ITI are then respectively of the
following form:

GTI: for all α (0 < α < 1) and for all random samples s of (sufficiently
large, determined by the distribution) size n of individuals from domain
D satisfying condition C there are a and b (0 < a < b < 1) such that the
probability that the ratio of individuals satisfying F is in the region [0, a]
does not exceed α/2, and similarly for the region [b, 1]
IC: s is a random sample of (sufficiently large) size n of individuals
from D satisfying C

ITI: the probability that the ratio in s of individuals satisfying F is in
R(α, n) =def [0, a] ∪ [b, 1] does not exceed α

ITInp: the ratio in s of individuals satisfying F is not in R(α, n)
In classical statistics a test of this kind is called a significance test with
significance level α (standard values of α are 0.05 and 0.01) and critical region
R(α, n). Moreover, the abstract GTI is called the null hypothesis and the classical
decision rule prescribes rejecting it when ITInp turns out to be false and not
rejecting it when ITInp comes true. However, from our perspective it is plausible
to categorize the first test result merely as a negative sample result or a
(statistical) 'counter-sample' and the second as a positive one. Of course, strictly
speaking, a counter-sample does not falsify the GTI, let alone the theory.
Moreover, from repeated positive results one may inductively jump to the
conclusion of a general statistical success GTI.
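For a binomial null distribution, as in the Mendelian example, the critical region R(α, n) can be computed directly. The following sketch is not from the text; the function names and the 3:1 example (p0 = 0.75) are illustrative assumptions.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(K <= k) for K ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def critical_region(n, p0, alpha):
    """Two-sided critical region for the number of successes: the largest
    k_low with P(K <= k_low) <= alpha/2 and the smallest k_high with
    P(K >= k_high) <= alpha/2, mirroring the regions [0, a] and [b, 1]."""
    k_low = -1
    while binom_cdf(k_low + 1, n, p0) <= alpha / 2:
        k_low += 1
    k_high = n + 1
    while 1 - binom_cdf(k_high - 2, n, p0) <= alpha / 2:
        k_high -= 1
    return k_low, k_high

def significance_test(successes, n, p0, alpha=0.05):
    """A 'counter-sample' falls in the critical region; otherwise the
    sample result is 'positive' (classically: non-rejection of the null)."""
    k_low, k_high = critical_region(n, p0, alpha)
    if successes <= k_low or successes >= k_high:
        return "counter-sample"
    return "positive"
```

For a Mendelian 3:1 hypothesis (p0 = 0.75) and n = 100, a count near the expected 75 yields a positive sample result, while a count of 50 is a counter-sample.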
Statistical test implications are frequently of a comparative nature. This
holds in particular when they derive from causal hypotheses. The reason is
that such hypotheses are essentially double hypotheses, one for the case that
the relevant (supposedly) causal factor is present and another for the case that
it is absent. Typical examples are the causal hypotheses governing drug testing.
An adapted standard significance test for a supposedly normally distributed
feature F on condition C in domains D1 and D2, with the same variance,
focuses on the (null) hypothesis that their expectation values are the same. It
is called the 't-test' or 'Student test' (after W.S. Gosset, who wrote under the
name 'Student'), and the resulting proper Statistical GTI and corresponding
IC, ITI and ITInp are now respectively of the following form:
GTI(comp): for all α (0 < α < 1) and for all sufficiently large (random)
samples s1 and s2 of sizes n1 and n2 of individuals from domains D1 and
D2, respectively, satisfying condition C there are a and b (0 < a < b < 1)
such that the probability that a certain function (the t-statistic) of the
difference between the respective ratios of individuals satisfying F is in
the region [0, a] does not exceed α/2, and similarly for the region [b, 1]¹⁰

IC(comp): samples s1 and s2 are random samples of (sufficiently large)
sizes n1 and n2 of individuals from D1 and D2, respectively, satisfying C

ITI(comp): the probability that the value of the t-statistic is in R(α, n) =
[0, a] ∪ [b, 1] does not exceed α
Again, if α is small enough according to our taste, we may, by a final
non-deductive jump, hypothesize the non-probabilistic ITI:

ITInp(comp): the value of the t-statistic is not in R(α, n)
This can again be described in classical terms of significance level and rejection
conditions. However, from our perspective it is again more plausible to use the
more cautious terminology of positive and negative test results for the null
hypothesis, which are, of course, negative and positive test results for the
primary (e.g., causal) hypothesis at stake.
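The comparative case can be sketched analogously. Since the t-distribution is not in the Python standard library, the critical value is taken as an input (to be looked up in a t-table for the chosen significance level); the function names are illustrative assumptions, not from the text.

```python
from statistics import mean, variance

def t_statistic(s1, s2):
    """Pooled-variance (Student) t-statistic for the null hypothesis that
    two normally distributed samples share the same expectation value."""
    n1, n2 = len(s1), len(s2)
    m1, m2 = mean(s1), mean(s2)
    # pooled estimate of the common variance
    sp2 = ((n1 - 1) * variance(s1) + (n2 - 1) * variance(s2)) / (n1 + n2 - 2)
    return (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

def comparative_test(s1, s2, t_crit):
    """Negative sample result for the null hypothesis iff |t| reaches the
    critical value t_crit (from a t-table, n1 + n2 - 2 degrees of freedom)."""
    return "negative" if abs(t_statistic(s1, s2)) >= t_crit else "positive"
```

A negative result for the null hypothesis of equal expectations is, as the text notes, a positive result for the primary causal hypothesis, and vice versa.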
5.2.3. Factors complicating HD-testing and evaluation
(1) The derivation of a general test implication from the theory is usually
impossible without invoking, explicitly or implicitly, one or more auxiliary
hypotheses, i.e., hypotheses which do not form a substantial part of the theory
under test, but are nevertheless required to derive the test implication. An
important type of such auxiliary hypotheses are specification hypotheses, that
is, hypotheses that specify particular constraints for the values of certain
(theoretical or non-theoretical) quantities in the specific kind of cases concerned.
Hence, to avoid falsification, one can challenge an auxiliary hypothesis. A
particular problem of auxiliary hypotheses may be that they are too idealized;
they may need concretization.
(2) The derivation of the general test implication presupposes that the
logico-mathematical claim can convincingly be proven. One may challenge this. A
successful challenge may, of course, lead to tracing implicit auxiliary hypotheses,
i.e., new instances of the first factor, which can be questioned as indicated
under the first point.
(3) Test implications have to be formulated in observation terms. However,
at present there is almost general agreement that pure observation terms do
not exist. All observation terms are laden with hypotheses and theories. New,
higher-order observation terms are frequently defined on the basis of more
elementary ones and certain additional presuppositions, which provide, e.g., the
relevant existence and uniqueness conditions. Such presuppositions form the
bridge between more and less theory-laden observation terms. These
presuppositions belong to the so-called background knowledge; they form the underlying
hypotheses and theories that are taken for granted. We call them the observation
presuppositions. The theory to be evaluated should itself not be an observation
presupposition, i.e., the observation terms relative to the theory should not be
laden with that theory itself, only with other ones. The relevant strategy now
is to challenge an observation presupposition. Ultimately, this may bring us
back to the observation language of a layperson who is instructed by the
experimenter to 'materially realize' (Radder 1988) an experiment.
(4) A general test implication specifies initial (test) conditions. They actually
have to be realized in order to conclude that the individual test implication
must be (come) true. One may challenge the claim that these initial conditions
were actually fulfilled. One important reason for repeating an experiment a
number of times is to make it sufficiently sure that these conditions have at
least once been realized. But if an outcome other than the one predicted occurs
systematically, one may defend the idea that there are structural causes preventing
the fulfillment of the intended initial conditions in the way one is trying to
install them.
(5) Whether an outcome is or is not in agreement with the predicted outcome
is usually not a straightforward matter. This is particularly the case when the
observation terms have vague boundaries or are quantitative, or when the
predicted effects and the observation data are of a statistical nature. In all such
cases, the question is whether the actual outcome is approximately equal to
the predicted outcome. To decide this we need a (previously chosen) decision
criterion. In the case of statistical individual test implications, several statistical
decision criteria have been standardized. Although a statistical decision criterion
also concerns an approximation question, we propose to reserve the term
approximation decision criterion for the case of vague or quantitative observation
concepts. In general, so the strategy goes, a decision criterion may or may
not be adequate in a particular case, i.e., it may or may not lead to the correct
decision in that case.
(6) Finally, in order to conclude that the theory has acquired a new general
success, the relevant general test implication first has to be established on the
basis of repeated tests. As a rule, this requires an inductive jump or inductive
generalization, which may, of course, always be contested as unjustified. Note
that a similar step is involved in the establishment of a general problem, but
we neglect it here.
To be sure, the localization of these factors need not always be as unambiguous
as suggested by the previous exposition, although we claim to have identified
the main occurrences. However this may be, the consequence of the first
five factors (auxiliary hypotheses, logico-mathematical claims, observation
presuppositions, initial conditions, and decision criteria) is that a negative outcome
of a test of a theory only points unambiguously in the direction of falsification
when it may be assumed that the auxiliary hypotheses and the observation
presuppositions are (approximately) true, that the logico-mathematical claim
is valid, that the initial conditions were indeed realized, and that the decision
criteria used were adequate in the particular case. Hence, a beloved theory
can be protected from threatening falsification by challenging one or more of
these suppositions.
Refined HD-scheme

In the subjoined, refined schematization of the concatenated HD-test arguments
(Scheme 5.3.) the five plus one vulnerable factors or weak spots in the argument
have been made explicit and emphasized by question marks. As in the foregoing

Theory under test: X
Auxiliary Hypotheses: A ?1?
LMC: if X and A then GTI ?2?
____________________________________ MP
GTI: for all x in D [if C(x) then F(x)]
Observation presuppositions: C/C*, F/F* ?3?
____________________________________ SEE
GTI*: for all x in D [if C*(x) then F*(x)]
x0 in D
Initial conditions: C*(x0) ?4?
____________________________________ UI + MP
ITI*: F*(x0)
Data from repeated tests
____________________________________ Decision Criteria ?5?
either: sooner or later a counterexample of GTI*, leading to the
conclusion not-GTI*
or: only positive instances of GTI*, suggesting inference of GTI*
by Inductive Generalization ?6?
exposition, the indication in the scheme of the different types of weak spots is
restricted to their main occurrences in the argument. SEE indicates Substitution
of presupposed Empirically Equivalent terms (in particular, C by C* and F by
F*). We contract the application of Universal Instantiation (UI) followed by
Modus Ponens (MP).
Neglecting the complicating factors, the left-hand case in Scheme 5.3. at the
bottom results in falsification of GTI* and hence of GTI, leading to falsification
of X. Under the same idealizations, the right-hand case results first in (the
implied part ITI* of) individual successes of GTI* (and indirectly of GTI),
and then, after the suggested inductive generalization, in the general success
GTI* (and hence GTI) of X.
Concluding remarks
If the truth question regarding a certain theory is the guiding question, most
results of this chapter, e.g., the decomposition of the HDmethod, the evaluation
report and the survey of complications, are only interesting as long as the
theory has not been falsified. However, if one is also, or primarily, interested
in the success question the results remain interesting after falsification. In the
next chapter we will show how this kind of separate HDevaluation can be
put to work in comparing the success of theories. Amongst others, this will
explain and even justify nonfalsificationist behavior, including certain kinds
of dogmatic behavior.
6
EMPIRICAL PROGRESS AND PSEUDOSCIENCE
Introduction
In this chapter we will extend the analysis of the previous chapter to the
comparison of theories, giving rise to a definition of empirical progress and a
sophisticated distinction between scientific and pseudoscientific behavior.
In Section 6.1. we will first describe the main line of theory comparison that
forms the natural extension of separate HD-evaluation to comparative
HD-evaluation. Moreover, we will introduce the rule of success for theory
selection suggested by comparative HD-evaluation, leading to an encompassing
evaluation methodology of instrumentalist flavor. This methodology can be
seen as the core method for the assessment of claims to empirical progress.
In Section 6.2. we will compare the evaluation methodology with the three
methods distinguished by Lakatos (1970, 1978): the naive and sophisticated
falsificationist method and the method of research programs, favored by
Lakatos. We will make it plausible that the evaluation methodology resembles
the sophisticated falsificationist methodology the most and that it may well be
more efficient for truth approximation than the naive falsificationist method.
In Section 6.3. we will argue that the, in some way dogmatic, method of
research programs may be a responsible way of truth approximation, as
opposed to pseudoscientific dogmatic behavior.
individual or general successes and problems, leads to an illuminating symmetric evaluation matrix, with corresponding rules of selection.
For, whatever happens, X has extra individual problems or Y has extra general
successes.
It should be conceded that it will frequently not be possible to establish the
comparative claim, let alone that one theory is more successful than all its
available alternatives. The reason is that these definitions do not guarantee a
constant linear ordering, but only an evidence-dependent partial ordering of
the relevant theories. Of course, one may interpret this as a challenge for
refinements, e.g., by introducing different concepts of 'relatively maximally'
successful theories or by a quantitative approach.
The symmetric models of separate HD-evaluation, i.e., the micro- and the
macro-model, suggest a somewhat different approach to theory comparison.
Although these approaches do not seem to be in use to the extent that the
asymmetric one is, and can only indirectly be related to truth approximation,
they lead to a very illuminating (comparative) evaluation matrix.
Let us first examine in more detail precisely what we want to list in the three
types of evaluation reports corresponding to the three models. From the present
facts. Not-F0 is a negative individual fact for X iff there are initial conditions
C0*, not necessarily equivalent to C0, such that X and C0* together (and not
separately) imply F0. Note first that this definition automatically implies that
the relevant C0* is also a negative individual fact for X. Negative individual
facts typically come in pairs, and a new theory should not introduce such
new pairs.
What happens if a new theory is better in the sense that it loses some
individual facts as negative facts? Let not-F0 and C0 be a pair of negative
individual facts for X, and suppose not-F0 is not a negative individual fact for Y.
This does not imply that C0 is not a negative individual fact for Y, for it might
come into conflict with Y and some individual fact other than not-F0. Hence,
although negative individual facts come in pairs, they need not lose that
status together with regard to some other theory.
That not-F0 is not a negative individual fact for Y also does not imply that
it is a positive individual fact for Y in the sense defined above. This suggests
the plausible definition of a neutral individual fact for Y: a fact which is neither
positive nor negative for Y. Note that if F0 is a positive individual fact for Y,
due to C0, then this information alone suggests that C0 is a neutral fact for Y.
But it may well be that facts other than F0 and/or more information about Y's
consequences lead to another status of C0.
The evaluation matrix

                          Y
               Negative   Neutral   Positive
X  Negative    B4: 0      B8: +     B9: +
   Neutral     B2: -      B5: 0     B7: +
   Positive    B1: -      B3: -     B6: 0
number for increasingly favorable results for Y, symmetry with respect to the
diagonal, and increasing number for increasingly positive indifferent facts.⁶
It is now highly plausible to define the idea that Y is more successful than
X in the light of the available facts as follows: there are no unfavorable facts
and there are some favorable facts, that is, B1/2/3 should be empty, and at
least one of B7/8/9 nonempty. This immediately suggests adapted versions of
the comparative success hypothesis and the rule of success.
It is also clear that we obtain macro-versions of the matrix, the notion of
comparative success, the comparative success hypothesis and the rule of success
by replacing individual facts by general facts. A general fact may be a general
success, a general problem or a neutral general fact for a theory. Note that
combinations with individual and general facts are also possible.⁷
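The symmetric comparison just defined can be put in a small sketch. The encoding of each fact as a (status for X, status for Y) pair and the function names are illustrative assumptions; the box labels follow the evaluation matrix.

```python
# Box labels of the evaluation matrix, indexed by
# (status for X, status for Y) of a fact.
BOX = {
    ("negative", "negative"): "B4", ("negative", "neutral"): "B8",
    ("negative", "positive"): "B9", ("neutral", "negative"): "B2",
    ("neutral", "neutral"): "B5", ("neutral", "positive"): "B7",
    ("positive", "negative"): "B1", ("positive", "neutral"): "B3",
    ("positive", "positive"): "B6",
}

def more_successful(facts):
    """Y is more successful than X iff boxes B1/2/3 (unfavorable for Y)
    are empty and at least one of B7/8/9 (favorable for Y) is nonempty.
    `facts` is a list of (status_for_X, status_for_Y) pairs."""
    boxes = [BOX[f] for f in facts]
    unfavorable = any(b in {"B1", "B2", "B3"} for b in boxes)
    favorable = any(b in {"B7", "B8", "B9"} for b in boxes)
    return favorable and not unfavorable
```

The same function covers the macro-version: simply feed it the statuses of general instead of individual facts.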
In all these variants, the situation of being more successful will again be rare.
However, it is certainly not excluded. In Chapter 11 we will see, for instance,
that the theories of the atom developed by Rutherford, Bohr and Sommerfeld
can be ordered in terms of general facts according to the symmetric definition.
Another set of examples of this kind is provided by Table 6.2. (from: Panofsky
and Phillips 1962², p. 282) below, representing the records, in the face of 13
general experimental facts, of the special theory of relativity (STR) and six
alternative electrodynamic theories, viz., three versions of the ether theory and
three emission theories.
Table 6.2. Comparison of experimental record of seven electrodynamic theories

[Table not reproducible from the source scan. It scores the special theory of
relativity and six alternatives against 13 experimental facts, among them the
light propagation experiments. The alternatives are three ether theories
(stationary ether, no contraction; stationary ether, Lorentz contraction; ether
attached to ponderable bodies) and three emission theories (original source;
ballistic; new source).]
As is easy to check, STR is more successful than any of the other ones; in
fact, it is maximally successful as far as the 13 experimental facts are concerned.
Moreover, Lorentz's contraction version of the (stationary) ether theory is more
successful than the contractionless version. Similarly, the ballistic version of
the emission theory is more successful than the other two. However, it is also
clear that many combinations lead to divided results. For instance, Lorentz's
theory is more successful in certain respects (e.g., De Sitter's spectroscopic
binaries) than the ballistic theory, but less successful in other respects (e.g., the
Kennedy-Thorndike experiments).
In the present approach it is plausible to define, in general, one type of
divided success as a liberal version of being more successful. Y is almost more
successful than X if there are, besides some favorable facts and (possibly) some
indifferent facts, some unfavorable facts, but only of the B3-type, provided
there are (favorable) B8- or B9-facts or the number of B3-facts is (much) smaller
than that of their antipodes, that is, B7-facts. The proviso guarantees
that it remains an asymmetric relation. Crucial is the special treatment of
B3-facts. They correspond to what is called Kuhn-loss: the new theory seems
no longer to retain a success of the old one. The idea behind their suggested
relatively undramatic nature is that further investigation may show that a
B3-fact turns out to be a success after all, perhaps by adding some additional
(non-problematic) hypothesis. In this case it becomes an (indifferent) B6-fact.
Hence, the presence of B3-facts is first of all an invitation to further research.
If this is without success, such a B3-fact becomes a case of recognized
Kuhn-loss. To be sure, when it concerns a general fact of a nomic nature it is more
impressive than when it concerns some general or individual fact that may be
conceived as 'accidental'. Unfortunately, Table 6.2. does not contain an example
of an almost more successful theory.
Cases of divided success may also be approached by some
(quasi-)quantitative weighing of facts. Something like the following quantitative
evaluation matrix (Table 6.3.) is directly suggested by the same considerations
that governed the number ordering of the boxes.
It is easy to calculate that all qualitative (i.e., Table 6.1.-induced) success
orderings of electrodynamic theories to which Table 6.2. gave rise remain intact
on the basis of Table 6.3. (which is not automatically the case). Moreover,
we then of course get a linear ordering, with Lorentz's theory in second
position, after STR, far ahead of the other alternatives. Of course, one may
Table 6.3. The quantitative (comparative) evaluation matrix

                           Y
               Negative     Neutral      Positive
X  Negative    B4: -1/-1    B8: +3/-3    B9: +4/-4
   Neutral     B2: -3/+3    B5: 0/0      B7: +2/-2
   Positive    B1: -4/+4    B3: -2/+2    B6: +1/+1
further refine such orderings by assigning different basic weights to the different
facts, to be multiplied by the relative weights specified in the matrix of Table 6.3.
Note that the qualitative and the quantitative versions of the evaluation
matrix can be seen as explications of some core aspects of Laudan's (1977)
problem-solving model of scientific progress, at least as far as empirical
problems and their solutions are concerned.
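A minimal sketch of the quantitative comparison suggested by Table 6.3., under two assumptions of mine, not the text's: each score pair is read as (relative weight for Y, relative weight for X), and all basic per-fact weights are set to 1.

```python
# Relative weights from the quantitative evaluation matrix (Table 6.3.),
# read as (score for Y, score for X) per box.
WEIGHTS = {
    "B1": (-4, +4), "B2": (-3, +3), "B3": (-2, +2),
    "B4": (-1, -1), "B5": (0, 0),   "B6": (+1, +1),
    "B7": (+2, -2), "B8": (+3, -3), "B9": (+4, -4),
}

def quantitative_scores(boxes):
    """Sum the relative weights over a list of box labels, yielding a
    total (score for Y, score for X). Different basic weights per fact,
    mentioned in the text as a further refinement, would enter here as
    multipliers; they are all taken equal to 1 in this sketch."""
    y_total = sum(WEIGHTS[b][0] for b in boxes)
    x_total = sum(WEIGHTS[b][1] for b in boxes)
    return y_total, x_total
```

Unlike the qualitative matrix, summed scores always yield a linear ordering of the compared theories, which is how Table 6.3. places Lorentz's theory second after STR.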
Let us briefly consider the possible role of simplicity. In (Kuipers 1993, SiS)
we show that the examples of theory comparison presented by Thagard (1992)
to evaluate the computer program ECHO can well be treated by the comparative
evaluation matrix (CEM), that is, ECHO and CEM are equally successful
with respect to these examples. However, ECHO is in many respects more
complicated than CEM. One respect is the fact that CEM only uses success
considerations, whereas ECHO applies success and simplicity considerations
simultaneously, even in such a way that, in theory, success may be sacrificed to
simplicity. Of course, as long as two theories are equally successful, we may
add as a supplement to the rule of success that simplicity considerations
provide good, pragmatic reasons to prefer the simpler ones; e.g., on the
meta-level CEM is preferable to ECHO.⁹
This applies, by definition, always in the case of observationally equivalent
theories. As in the case of acceptance (Section 2.3.), simplicity considerations and, more generally, our background beliefs may determine which of
two (perhaps already falsified) theories is more plausible (in the sense that it
is closer to the theoretical truth, the theory realist will add). Background beliefs
may determine in this way our preference among observationally equivalent
(falsified) theories; of course, these beliefs no longer serve this purpose if
they diagnose the two theories as equally plausible.
Be this as it may, returning to just momentarily equally successful theories,
we do not see good reasons to apply simplicity considerations as long as
success criteria lead to an ordering of two theories. The lack of good reasons seems
to hold at least for deterministic theories. In particular, in the case of such
theories there does not seem to be any link between simplicity and truth
approximation. However, recent publications (see Sober (1998) for an
overview) suggest that there is a link between simplicity and (observational) truth
approximation. If so, simplicity may well be counted as a kind of success.
6.2. EVALUATION AND FALSIFICATION IN THE LIGHT OF TRUTH APPROXIMATION
Introduction
Whereas the method of HD-testing and HD-evaluation, and hence the evaluation methodology, have a falsificationist flavor, each with its own aim, they
are certainly not naive in the sense in which Popper's methodology has sometimes been construed. Naive falsificationism in the sense described by Lakatos
The term 'improvement' may be somewhat misleading, since the more successful
theory may be either an improvement of an older theory, e.g., within the same
research program, or really a new theory.
On the falsificationist side, rRS and RE both presuppose only the application
of all mentioned principles restricted to not yet falsified theories, for as soon
as we have obtained in this way a (convincingly) falsified theory, it is put out
of the game by RE. In other words, the falsificationist methodology, governed
by rRS&RE, presupposes the restricted application of PSE, indicated by rPSE,
and the restricted version of PI, indicated by rPI. If one does not yet have
an unfalsified theory to which to apply rPSE, one has to invent one.
It is also important to note that the application of RE and of PSE, whether
the latter is restricted or not, presupposes that the relevant theory is testable,
or falsifiable or confirmable:
Principle of Testability (PT)
Aim at theories that can be tested, and hence evaluated, in the sense
that test implications can be derived which can be tested for their truth-value by way of observation
Hence, the relativization of the methodological role of falsification, inherent in
the evaluation methodology, should not be construed as a plea to drop falsifiability as a criterion for being an empirical theory. On the contrary, empirical
theories are supposed to be able to score successes, to be precise, general
successes. This evidently presupposes falsifiability. However, we prefer the
neutral term 'testability', for falsifiability and confirmability obviously are
two sides of the same coin. This observation is seriously in conflict with
Popper's critique of aiming at confirmation. Note also that Popper's plea to
give priority to testing the most unexpected test implications can equally well
be conceived as aiming at as high a confirmation as possible, for surviving such
tests gives, of course, the highest success value.
In sum, the evaluation methodology, governed by RS, can now be summarized by PI, presupposing PSE and PT, whereas the falsificationist methodology, governed by rRS&RE, amounts to rPI&RE, both presupposing rPSE
and PT.
Though Popper primarily promoted the view that theories should be falsifiable, i.e.,
testable (PT), and that they should be tested, i.e., evaluated in our sense (PSE)
and improved in our sense (PI), it is frequently suggested that he promoted the
more inclusive falsification principle, including RE, and hence restricted PSE
and PI to rPSE and rPI. However, the number of times that Popper seems to
plead for RE, or for the combination rPI&RE, is negligible compared to the
number of times that he pleads for the unrestricted PI. In this respect, it is
important to stress that Popper uses the expression '(principle or rule of)
elimination' almost always in the sense of 'elimination of error', and this is
precisely what PI amounts to. Hence, as Lakatos has suggested, Popper is
most of the time a sophisticated falsificationist. However this may be, it is at
least evident that rPI&RE, due to RE, does not use all opportunities for
empirical progress, whereas PI does. Hence, RE is not useful for empirical
progress in the sense of RS (and, as we will see, not for truth approximation
either). The only justification of RE that remains is a pragmatic one. If one
wants to use a theory to design a certain product or process and if it is
important in that context to avoid risks, it is plausible to apply RE when it is
possible.
But besides retarding empirical progress in the sense suggested, it is also
plausible to think that RE affects the prospects for truth approximation. A
striking feature of PI in this respect is that the question of whether the more
successful theory is false or not does not play a role at all. That is, the more
successful theory may well be false, provided all its counterexamples are also
counterexamples of the old theory. In the next chapter it will be proven that
RS, and hence PI, are not only functional (to say 'effective' would have
too strong connotations here) for approaching the truth in a precise sense, whatever
the truth precisely is, but that they are even efficient in doing so. On the other
hand, it will be shown that rRS&RE, and hence rPI&RE, is also functional for
truth approximation, due to rRS, but very inefficient, due to RE. The reason
is that RE prescribes that when a theory encounters a counterexample one
always has to look for a new theory that is compatible with the data thus far.
A shortcut to the truth of a theory with many (types of) counterexamples,
via theories with fewer ones, is excluded. To be sure, the falsificationist methodology, including the comparative part, is functional and efficient in searching
for an answer to the leading question of testing a theory, viz., its truth-value.
To put it somewhat more generally and dramatically, first, something like
the cunning of reason is operative in scientific research: the evaluation and the
falsificationist methodologies, though no guarantee for truth approximation,
are both functional for truth approximation in a weaker but precise sense.
Hence, realists may claim that they approach the truth by using the falsificationist method, at least as a rule. However, more surprisingly, if one applies the
evaluation methodology, one comes, as a rule, closer to the truth, whether one
likes it or not.
Last but not least, the irony is that the cunning of reason works more
efficiently when the evaluation methodology is applied than when the falsificationist methodology is applied. The reason is that the falsificationist allows
himself, as it were, to be distracted by something which turns out to be
irrelevant for approaching the truth, viz., that the theory is false.
The proof of these claims starts from the asymmetric model of HD-evaluation
and is based on the structuralist theory of truthlikeness and plausible interpretations of individual problems and general successes. The main theorem,
called the success theorem, to be presented in the next chapter, states that
being the closest to the truth among the available theories guarantees being
the most successful theory. To extend the proof to the two symmetric
models merely requires some transformation of individual successes and general
problems. By way of an easy lemma, it follows that the result of a crucial
experiment is always functional, though again no guarantee, for truth
approximation.
[Figure 6.1. The landscape of the truth: statements ordered by increasing logical strength, from the tautology down to the contradiction, with the truth (the strongest true statement) and the diamond of true statements indicated; the remaining statements are false.]
Naturally, this also applies to the set of statements following from the truth
(recall, the strongest true statement), which is, of course, precisely the set of
true statements. This is the case when the set of true statements is taken in
isolation. However, the closed diamond of true statements in Figure 6.1. only
arises after sufficient reshuffling of the true and false statements of the same
strength (i.e., horizontally). But the points that will be made do not depend on
this graphic simplification.
The following possibilities are easily read off from Figure 6.1, which may be
said to represent the landscape of the truth. A false theory may well be (much)
closer to the truth than a true one. And, although the question of whether a
theory is true or false is relevant for the question of whether the theory coincides
with the truth, the first question is irrelevant for its distance to the truth as
long as the theory in question does not exactly coincide with the truth. Finally,
it follows immediately from the landscape of the truth that it is possible to use
a theory development strategy, such as idealization and concretization, that
leads via a whole chain of false theories to the truth.
It is clear that all these possibilities exist due to the plausible explication of
'the truth' as the strongest true statement. However, it is also important to
note that the three indicated possibilities do not presuppose that the truth can
be recognized as such, nor that the truth is verifiable, let alone that it can be
established with certainty. What is needed is only that the truth gives recognizable signals, without making their source derivable from them.
These remarks are easy to combine with a literally geographical analogy for
truth approximation in general, and for the possibility of the irony of the cunning
of reason in particular. Consider the task of finding in the Netherlands the most
southeastern spot at the same level as N.A.P. (Normaal Amsterdams Peil, the
Dutch reference level). Assuming some very plausible arrangements, there must
be precisely one such spot; there is no reason to try, with spasmodic efforts, to
start and remain in areas not below N.A.P.
Introduction
about a hard core in this case, it is the general idea that the core vocabulary
generates, by itself or in extended versions, an interesting restriction of what is
possible in reality. This suggests that there is, between the general PI and the
special version PIRP, a broader special version, dealing with dogmatically sticking to 'core vocabularies':
Principle of improvement guided by core vocabularies (PICV)
One should primarily aim at progress within a core vocabulary, i.e., aim
at a better theory while keeping the core vocabulary intact; if, and only
if, this does not work, look for another vocabulary, which may or may
not be very different
Again, PICV is so formulated that it includes something like a principle of
improvement of vocabularies. Moreover, since PICV is a special version of PI
it is also functional for truth approximation. PICV has been very successful in
the history of science, in particular where descriptive research is concerned.
The history of 'descriptive (or inductive, see below) thermodynamics', dealing
with the search for relations between volume, pressure and temperature, and
the history of 'descriptive chemistry', dealing with chemical reactions, provide
cases in point.
6.3.3. Pseudoscience
From the foregoing it follows that dogmatically dealing with theories has to
be qualified as unscientific when one apparently does not aim at applying PIRP
or PICV. Moreover, it is plausible to characterize pseudoscience as the combination of scientific pretensions and the neglect of PI, in particular of its dogmatic
versions PIRP and PICV. This characterization can be seen as an, in some
respects, improved version of that in the introduction of Lakatos (1970, 1978).
The standard examples of pseudoscience, such as astrology, graphology, homeopathy, parapsychology, creationism, and ufology, satisfy these conditions.13
In all these cases, it is not only quite clear that central dogmas are the point
of departure of unscientific research and application, but it is also rather easy
to indicate how they could become the point of departure of serious research.
Such research, however, is seldom started.
We do not, of course, claim that within the sphere of academic research
pseudoscientific behavior does not occur, but it takes place less there than outside
that sphere. Marxist economics, psychoanalytic psychology, and evolutionary
biology increasingly seem to follow the general rules and principles of scientific
research. However, claims of this kind are highly controversial, as the works
of Blaug (1980) and Grünbaum (1984) illustrate for Marxist economics and
psychoanalytic theory, respectively.
An interesting question concerns how theology and philosophy should be
evaluated in this respect. So-called systematic theology certainly has scientific
pretensions, but usually no empirical ones. Nevertheless, it has, directly or
that the truth or falsity of a theory is the sole interest. Our analysis of the
HD-method makes it clear that it would be much more adequate to speak of
the Context of Evaluation. The term 'evaluation' would refer, in the first place,
to the separate and comparative HD-evaluation of theories in terms of successes
and problems. As we have indicated, it may even refer to the further evaluation
of their relative merits in approaching the truth, or at least the observational
truth.
As a consequence, the foregoing may not only be interpreted as a direct plea
for restricting the test methodology to cases where our only interest is the truth
question; as soon as we are interested in the success question, the evaluation methodology is more adequate. That methodology can, moreover, be
justified, surprisingly enough, in terms of truth approximation.
Let us, finally, pay some more attention to the pros and cons of our strict
comparative approach. As has been stressed already, there are few theories
that can be ordered in terms of 'more successfulness' according to our strict
definition. The same holds for our basic and refined orderings in terms of 'more
truthlikeness' in the following chapters. Hence, the limited applicability of our
comparative notions might be seen as a serious shortcoming, supporting a
more liberal comparative or even quantitative approach. To be sure, such
liberalizations are very welcome as far as they are realistic concretizations. We
introduced already the idea of 'almost more successful', which has a plausible
analogue in terms of 'almost more truthlike', which we will, however, not
pursue in further detail. In Chapter 10 we will introduce the refined notions of
'more successfulness' and 'more truthlikeness'. The basic idea behind the (further) refined notion of more successfulness is that one counterexample may
nevertheless be better than another. Although it could be presented already
now, for it is a matter of empirical success evaluation, leading to a refined
version of the rule of success, it is technically more suitable to present it after
the presentation of the refined notion of 'more truthlikeness'. In Chapter 10
we will also question the usefulness of quantitative liberalizations of 'successfulness' and 'truthlikeness', mainly because they need real-valued distances
between models, which are very unrealistic in most scientific contexts. Hence,
the applicability of liberal notions may well be laden with arbitrariness. For
this reason, we want to focus on unproblematic cases, guaranteeing that we
get the bottom line of progress and rationality. In Chapter 11 we will deal with
the succession of theories of the atom of Rutherford, Bohr and Sommerfeld
and argue that it is a sequence of increasing success and, potentially, even of
increasing truthlikeness, both in the strict (refined) sense. Hence, although the
strict approach may not have many examples, it has impressive ones.
Moreover, it is even more important that the strict strategy does not lead to
void or almost void methodological principles. If there is divided success
between theories, the Principle of Improvement amounts, more specifically, to
the recommendation to try to apply the Principle of Dialectics: "Aim at a
success-preserving synthesis of the two RS-escaping theories", of course, with
PART III
This part of the book introduces and analyzes the theory of naive or basic
truth approximation, and its relation to empirical progress and confirmation,
first for epistemologically unstratified theories, later for stratified ones.
In Chapter 7 the qualitative idea of truthlikeness will be introduced, more
specifically the idea that one description can be closer or more similar to the
truth than another, called 'actual truthlikeness', and the idea that one theory
can be closer to the truth than another, called 'nomic truthlikeness'. In the first
case, the truth concerns 'the actual truth', that is the truth about the actual
world, or the actualized possibility, as can be expressed within a given vocabulary. In the second case it concerns 'the nomic truth', i.e., the strongest true
hypothesis, assumed to exist according to the 'nomic postulate', about what
are the physical or nomic possibilities, called 'the nomic world', restricted to a
given domain and, again, as far as can be expressed within a given vocabulary.
Besides indicating some plausible ways of approaching the actual truth, it will
be argued that the evaluation methodology is effective and efficient for nomic
truth approximation. In this and the following chapter, the 'basic' explication
of nomic truthlikeness and truth approximation will be at stake, which, though
appealing to scientific and philosophical common sense, has no real-life scientific examples as yet. A preview will be given of more realistic bifurcations of
actual and nomic truth approximation to be elaborated in later chapters.
Chapter 7 closes with explicating the role of novel facts, crucial experiments,
inference to the best explanation, and the idea of inductive research programs
in the light of truth approximation.
Chapter 8 begins by arguing that 'basic' nomic truthlikeness and the corresponding methodology have plausible conceptual foundations, of which the
dual foundation will be the most appealing to scientific common sense: 'more
truthlike' amounts to 'more true consequences and more correct models',
whereas 'more successful' amounts to 'more established true consequences, i.e.,
successes, and fewer established incorrect models, i.e., counterexamples'. The
theme of conceptual foundations will recur in later chapters, and the adapted
dual foundation will remain superior to the two uniform foundations, i.e., one
solely in terms of models and the other solely in terms of consequences.
Next, it will be argued that actual and basic nomic truthlikeness suggest a non-standard, viz., intra-level instead of inter-level, explication of the main intuitions
governing the so-called correspondence theory of truth. Finally, it will be made
CHAPTER 7: TRUTHLIKENESS AND TRUTH APPROXIMATION
Introduction
When Karl Popper published, in 1963, in Chapter 10 of Conjectures and
Refutations, his definition of 'closer to the truth', this was an important intellectual event, but not a shocking one. Everybody could react by saying that the
definition was as it should be, and even that it could have been expected. For
the definition was indeed plausible: a theory is closer to the truth than another
if the true consequences of the first include those of the second and the false
consequences of the second include those of the first.
About ten years later the event of 1963 became shocking with retrospective
effect when David Miller (1974) and Pavel Tichy (1974) independently proved
that a false theory, in the sense of a theory with at least one false consequence,
could according to Popper's definition never be closer to the truth than
another one.
With this proof they demolished the definition, for it could not do justice to
the presupposed nature of most of the culminating points in the history of
science. New theories, such as Einstein's theory, though presumably false, are
more successful than their predecessors, such as Newton's theory, just because
they are closer to the truth. In other words, the greater success of new theories
cannot be explained simply in terms of the truth of new theories and the falsity
of their predecessors, but should be explained in terms of their decreasing
distance from the truth: that a sequence of presumably false theories is converging to the truth should explain their increasing success. Of course, 'the truth'
should then be interpreted as the unknown strongest true hypothesis about
the relevant domain of phenomena within a certain vocabulary.
With their proof, Miller and Tichy unchained in the beginning, besides signs of
disappointment, mainly skeptical remarks like "only the intuitive idea is important,
fortunately", "it shows that you can't solve philosophical problems by formal
means" and, last but not least, "it is the punishment for freely speaking about
the truth".1
However, after some time, Miller, Tichy and other philosophers recovered
from the fright and started to develop alternative definitions of 'closer to the
truth'. Today at least five approaches to verisimilitude, or truthlikeness as the
subject is more and more frequently called, can be distinguished. This name refers to
the fact that the distance of a theory to the true theory gradually became
reinterpreted as the similarity of the theory with the true theory. The core of
the problem is the explication of the similarity of theories. The concept of
converging to the truth then follows almost automatically.
Four approaches are represented by the following authors and publications:
Niiniluoto (1987a/b), Oddie (1986, 1987a/b), Schurz and Weingartner (1987),
and Brink and Heidema (1987). The fifth approach, called the structuralist
approach, was first published in its naive or basic form in 1982 (Kuipers 1982b).
It was independently introduced by Miller (1978) for a very special case. The
structuralist definition is here presented in its full basic, stratified and refined
forms (this and the following chapters), leaning heavily on (Kuipers 1982b,
1984a, 1987b, 1992a/b). (Kuipers 1987a) contains a parade of the approaches,
except that of Brink and Heidema. Niiniluoto (1987a/b) and Oddie (1986)
present versions of the so-called similarity approach. For recent, detailed, and
comparative studies, see (Kieseppa 1996) and (Zwart 1998).
All approaches have in common that they succeed in avoiding the problem
of Popper's definition. Hence, a sequence of false theories converging to the
truth is technically possible. This enables the formulation of the claim that
historical sequences of theories constitute, as a rule, sequences of theories
converging to the truth. However, this does not yet imply that all these
approaches can give a clear justification for the already suggested basic intuition
that the increasing success of successive theories can be explained in terms of
their increasing truthlikeness. Justification of this intuition may be called
the challenge of Larry Laudan (1981). According to Laudan, realists usually
assume this connection, but they should prove it. Two of the five approaches
explicitly claim to do so: Niiniluoto's 'quantitative likeness' approach and our
structuralist approach.
Like Popper's point of departure, all approaches refrain from a purely
metaphysical conception of 'the truth' and agree upon some kind of moderate
metaphysical realism. It is assumed that 'the truth' about a certain part or
aspect of reality is on the one hand determined by the previously chosen
vocabulary, and within these conceptual boundaries it is determined by the
nature of reality itself. The vocabulary determines what aspects of reality can
be expressed. It is evident that there are for each domain several vocabularies,
leading to equally many truths about that domain. In contrast, extreme metaphysical realism may be supposed to claim that reality carries with it its ideal
vocabulary.
Moderate realism does not need to degenerate into conceptual relativism, as
long as it is assumed that there may be all kinds of overlap, connections and
constraints between vocabularies and that such vocabularies may be put
together into more encompassing vocabularies. In other words, fundamental
incommensurability between paradigms, research programs, conceptual frameworks and the like, need not be assumed as the rule, and it is even possible to
reject it on formal grounds (as will be argued in Subsection 9.3.3.). Moreover,
even if fundamental incommensurability is sometimes unavoidable, this does
not yet exclude practical commensurability.
This form of realism is highly defensible for the natural sciences: it implies
that theories are true or false and leaves room for the possibility that theoretical
terms pretend to refer to something in reality. However, this type of realism is
difficult to defend for the social sciences, because the conceptual representation
of social reality as well as social reality itself are both human constructions.
Although there may be assumed to be a human-independent natural world,
speaking of a human-independent social world is a contradiction.
The approaches differ technically. In the first place, many varieties of using
syntactic and semantic means occur. It is only the structuralist approach that
refrains, by definition, from any explicit role of (the sentences of the relevant)
language (although it can be reconstructed in model-theoretic terms, see
Section 8.1.). In the second place, one may focus immediately on a quantitative
notion of truthlikeness, as e.g., Niiniluoto does, or claim that the problem of qualitative or comparative truthlikeness should have priority, as is for instance plausible in the structuralist approach. Strictly speaking, any comparative approach
is concerned with the notion of 'more' or 'increasing truthlikeness', but we will
frequently just talk about 'truthlikeness' when misunderstandings are excluded.
Another difference is of a more fundamental nature. One may start from the
idea that there is essentially one type of (conceptually relative) truth: the truth
about the actual world, and hence one problem of truthlikeness. On the other
hand, one may claim, as in the structuralist approach, that there are essentially
two problems of truthlikeness. The first is engaged with explicating the idea
that one description is more similar to the true description than another
description, the problem of actual truthlikeness; the second is engaged with
explicating the idea that one theory is more similar to the true theory about
what is possible in reality, i.e., what is nomically possible, than another theory,
the problem of nomic truthlikeness.2
Introduction
In this section we introduce some structuralist notions and then specify the
definition of 'actual truthlikeness' for structure descriptions, in particular for
those of a propositional nature.
7.1.1. Actual truthlikeness
[Figure 7.1. An electric circuit with bulb p0 and switches p1, p2, p3, p4.]
Let pi, for 1 ≤ i ≤ 4, indicate that switch i is on, and ¬pi that it is off.
Let p0 (¬p0) indicate that the bulb lights (does not light). It is assumed that
the bulb is not defective and that there is enough voltage. A possible state of
the circuit can be represented by a conjunction of negated and unnegated pi's.
It is clear that there is just one true description of the actual world, i.e., state,
of the circuit as it is depicted, p0 & p1 & ¬p2 & p3 & p4, according to the standard
propositional representation. Hence, the example nicely illustrates, among
other things, that we consider 'the actual world' primarily as something partial and
local, i.e., one or more aspects of a small part of the actual universe. However,
it need not be restricted to a momentary state; it may also concern an actual
trajectory of states. In sum, the actual world is the actual world in a certain
context.
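The propositional representation of the circuit's states can be sketched as follows; the encoding of states as Python dictionaries, and the helper name `conceptual_possibilities`, are merely one convenient (hypothetical) choice:

```python
from itertools import product

# p0: the bulb lights; p1..p4: switch i is on.
PROPS = ["p0", "p1", "p2", "p3", "p4"]

def conceptual_possibilities():
    """All 2^5 = 32 truth-value assignments generated by the vocabulary,
    i.e., all candidate descriptions of a state of the circuit."""
    return [dict(zip(PROPS, vals)) for vals in product([True, False], repeat=5)]

# The one true description t of the depicted state: p0 & p1 & ~p2 & p3 & p4.
t = {"p0": True, "p1": True, "p2": False, "p3": True, "p4": True}

print(len(conceptual_possibilities()))  # → 32
print(t in conceptual_possibilities())  # → True
```

Of the 32 conceptual possibilities, exactly one is the true description of the depicted actual state; the others are candidate descriptions at varying distances from it.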
For the general case, let there be given a domain D of natural phenomena
(states, situations, systems) to be investigated. D is supposed to be circumscribed
by some informal, intensional description and may be called the primitive set
of intended applications. Let there also be given a vocabulary V, designed to
characterize D. V is some kind of settheoretic vocabulary, i.e., a vocabulary
specifiable in settheoretic terms. One type concerns ordered sets of elementary
possibilities represented by so-called elementary propositions, as in the case of
the circuit example (propositional vocabularies). Another type concerns ordered
sets of terms (or components), indicating domainsets, as well as properties,
relations and functions defined on them (first order vocabularies). A vocabulary
V gives rise to a set Mp(V), or simply Mp, of potential models or conceptual
possibilities.4 Mp is also called the conceptual frame, designed for D. It may be
assumed that Mp is, technically speaking, a set of structures of a certain
(similarity) type; below a characterization will be given of first order structures
of the same type. In practice Mp will be the conceptual frame of a research
program for D. Note that V has to contain a proper subvocabulary, the
vocabulary of the domain, that enables fixing D in an unambiguous way, viz.,
as a subset of the set of conceptual possibilities generated by that subvocabulary.
The confrontation of a particular situation or state of affairs or system in D,
the actual world so to speak, with Mp, is assumed to generate just one correct
representation or one true description, indicated by t.
The problem of actual truthlikeness amounts to the explication of the idea
that an arbitrary description, i.e., an arbitrary conceptual possibility, is (more
similar or) closer to the true description than another one. As suggested by the
example, we will first concentrate on propositional descriptions.
7.1.2. Truthlikeness of propositional descriptions
y − t is a subset of x − t
t − y is a subset of t − x

There is also a strict version, requiring that in at least one case the subset is a
proper subset or, equivalently, that sp(y, x, t) does not obtain at the same time.
This strict version is represented in Figure 7.2.
It is easy to check that the conjunction of the two clauses is equivalent to
the claim that the symmetric difference y Δ t between y and t, defined as
(y − t) ∪ (t − y), has to be a (proper) subset of the symmetric difference between
x and t.
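Reading a propositional description as the set of its unnegated elementary propositions, the two clauses and their symmetric-difference reformulation can be sketched as follows; the function names and the particular sets are hypothetical illustrations:

```python
def at_least_as_close(y, x, t):
    """y is at least as similar to the true description t as x:
    (i) y - t is a subset of x - t, and (ii) t - y is a subset of t - x."""
    return (y - t) <= (x - t) and (t - y) <= (t - x)

def strictly_closer(y, x, t):
    """Strict version: y is at least as close to t as x, and x is not
    at least as close to t as y."""
    return at_least_as_close(y, x, t) and not at_least_as_close(x, y, t)

def via_symmetric_difference(y, x, t):
    """Equivalent formulation of the weak version: (y Δ t) ⊆ (x Δ t)."""
    return (y ^ t) <= (x ^ t)

# Hypothetical descriptions over the circuit vocabulary:
t = {"p0", "p1", "p3", "p4"}   # the true description
y = {"p0", "p1", "p3"}         # one mistake: misses p4
x = {"p0", "p2"}               # misses p1, p3, p4 and wrongly adds p2
print(strictly_closer(y, x, t))                                         # → True
print(at_least_as_close(y, x, t) == via_symmetric_difference(y, x, t))  # → True
```

The symmetric-difference set y Δ t collects exactly y's mistakes relative to t, so the comparison says, plausibly, that y makes fewer (kinds of) mistakes than x.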
Figure 7.2. Actual truthlikeness of propositional descriptions: y is more similar to t than x: sets 1
and 5 empty, sets 3 or 7 nonempty
Introduction
The idea of nomic truthlikeness is based on the Nomic Postulate to be presented
first. Following this, an outline is given of the structuralist approach to theories.
strongest law, explains all other laws even in the nonliberal realist sense. In
the following we will use the neutral entailment terminology. Moreover, all
hypotheses and theories are supposed to deal with the same domain. 9
A law of nature is traditionally understood to be a true impossibility statement, e.g., a perpetuum mobile is (nomically) impossible. It is important to
note that a hypothesis X in our sense is in fact a domain-relative version of
such a, potentially true, impossibility claim, viz., phenomena that, given Mp,
would have to be represented by a member of Mp − X are claimed to be nomic
impossibilities. In case hypothesis X is true, i.e., when we may speak of law X,
this claim is true. Note further that, when hypothesis X, whether true or false,
fails to recognize a nomic impossibility x as such, i.e., when x belongs to X − T,
this means that it fails to entail the law to the effect that x and similar
conceptual possibilities are nomic impossibilities.
It is clear that the notions of hypothesis and theory are here used in a specific
sense, not related to theoretical terms. Here, both a hypothesis and a theory
may or may not have theoretical terms of their own, i.e., they may both be proper
or merely observational hypotheses and theories in the sense of using or not
using theoretical terms. In Chapter 9 we will consider stratified hypotheses and
theories. They are designed for observational hypotheses and theories on the
observational level and proper hypotheses and theories using the theoretical
level. The present chapter essentially treats all non-logico-mathematical terms
as if they were observational or, alternatively, as if we are in an omniscient
position as far as the application of terms is concerned. The crucial difference
between a hypothesis and a theory in the sense used here is that the claim of
hypothesis X is just one of the two claims of the corresponding theory X.
The 'problem state' of theory X is depicted in Figure 7.3.

Figure 7.3. The problem state of X: its matches T ∩ X (internal) and T̄ ∩ X̄ (external), and mistakes T − X and X − T

Theory X can make two kinds of mistakes. The members of T − X, if any, are called (T-)internal
mistakes: nomic possibilities that are excluded by X; in other words, they are
the realizable counterexamples, or wrongly missing models, of X. On the other
hand, the members of X − T, if any, may be called (T-)external mistakes of X:
nomic impossibilities that are not excluded by X, that is, wrongly admitted
models, also called mistaken models. Note that the external mistakes form, by
definition, kinds of counterexamples that cannot be realized. The set of all
mistakes of theory X is the union of these two sets T − X and X − T, which is
technically called the symmetric difference between X and T, indicated by X Δ T.
A theory not only makes mistakes, but also makes matches. T ∩ X represents
the (T-)internal matches: nomic possibilities that are recognized as such by X
or, in other words, they are the realizable examples of X or the rightly admitted
or correct models of X. Let X̄ indicate the complement of X with respect to
Mp, i.e., Mp − X. The members of T̄ ∩ X̄ are the (T-)external matches: nomic
impossibilities that are rightly excluded by X. The external matches are kinds
of examples that cannot be realized. The union of the two sets of matches of
theory X is of course equal to the complement of the total set of mistakes, viz.,
Mp − (T Δ X).
Note that all mistakes and matches are ultimate in the sense that they need
not have been established. Established mistakes and matches will later come
into the picture.
Note that the terminology of (T-)internal and (T-)external matches and
mistakes is not laden with the Nomic Postulate. It leaves room for another
type of target set T.
Table 7.1. presents the matches and mistakes of a theory in a matrix.

Table 7.1. The matrix of matches and mistakes of theory X

                 Mistakes       Matches             Total (union)
  Internal       T − X          T ∩ X               T
  External       X − T          T̄ ∩ X̄              T̄
  Total (union)  T Δ X          Mp − (T Δ X)        Mp
(Bi)   Y − T is a subset of X − T
(Bii)  T − Y is a subset of T − X
To begin with the second clause (Bii), it says that the internal mistakes of X
include those of Y, or equivalently, that all internal matches (correct models)
of X are internal matches (correct models) of Y. Hence (Bii) can be read as a
claim about all nomic possibilities: for all nomic possibilities x, if x is not a
model of Y then it is not a model of X or, equivalently, if x is a model of X
then it is a model of Y. For this reason (Bii) will be called the (T)internal
clause. The first clause (Bi) states that the external mistakes (mistaken models)
of X include those of Y, or equivalently, that all external matches of X are
external matches of Y. Hence (Bi) can be read as a claim about all nomic
impossibilities, for which reason it is called the (T)external clause. Note that
(Bi) and (Bii) together are equivalent to the claim that the mistakes of Y
(Y Δ T) form a subset of those of X (X Δ T).
By MTL+(X, Y, T) we indicate that Y is more truthlike than X 'in the strict
sense' that the mistakes of Y form a proper subset of those of X, that is, Y is
at least as truthlike as X, but not the reverse. If the reverse holds as well, then
they are 'equally truthlike'. If neither is at least as truthlike as the other, they are
'incomparable' (in truthlikeness). Here and later the strong verbal expressions
'closer to' or 'more similar to' will however also be used to refer to the
corresponding weak notion. When the strict notion is meant, it will be explicitly
stated by adding 'in the strict sense'.10 The strict version is depicted in Figure 7.4
(in which Mp is not explicitly indicated). Note that Figure 7.4, including the
conditions, is formally equivalent to Figure 7.2.
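As a sketch, the weak and strict comparative notions can be written out for finite stand-ins for theories and the target T; the sample sets are illustrative, not from the text.

```python
# Basic nomic truthlikeness, sketched with small finite sets
# (all names and sample sets illustrative).

def mtl(X, Y, T):
    """MTL(X, Y, T): (Bi) Y - T subset of X - T (external clause) and
    (Bii) T - Y subset of T - X (internal clause); jointly equivalent
    to Y ^ T being a subset of X ^ T."""
    return (Y - T) <= (X - T) and (T - Y) <= (T - X)

def mtl_strict(X, Y, T):
    """MTL+(X, Y, T): Y at least as truthlike as X, but not the reverse."""
    return mtl(X, Y, T) and not mtl(Y, X, T)

T = {1, 2, 3}
X = {3, 4, 5}
Y = {2, 3, 4}   # drops external mistake 5, gains internal match 2
assert mtl_strict(X, Y, T)
assert mtl(X, Y, T) == ((Y ^ T) <= (X ^ T))
```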
Figure 7.4. Nomic truthlikeness of theories: Y is more similar (closer) to T than X: 1 and 5 empty, and 3 or 7 nonempty

(Bi)   Y − (X ∪ T) = 5 is empty
(Bii)  (X ∩ T) − Y = 1 is empty
The first condition now reveals that Y makes no extra external mistakes, and
the second discloses that Y makes no extra internal mistakes.
It is important to note that improving a theory X in the sense of finding a
theory Y such that MTL+(X, Y, T) is not an easy task, due to the fact that
the two components counteract each other. This can be nicely illustrated by
considering, e.g., just weakening theory X: if Y is weaker than X (which was
defined as: X ⊆ Y), then Y contains at least all nomic possibilities contained
by X, but X misses at least all nomic impossibilities Y misses. Of course,
strengthening a theory leads to the opposite tension.
The external clause (Bi) has a very illuminating equivalent version on the
level of sets of sets, to be called the second level (of sets), as opposed to the
first level of sets. Let Q(X) indicate the set of supersets of X, that is, the set of
subsets of Mp which include X, which might be called the copowerset of X.12
Q(X) represents the set of hypotheses following from theory X, for, as is easy
to check, all hypothesis-claims associated with members of Q(X) follow from
the theory-claim of X. Recall that a true hypothesis corresponds to a set
including T, hence to a member of Q(T). Now it is not difficult to prove the
'bridge theorem' that the external clause (Bi), i.e., Y − T ⊆ X − T, is equivalent
to the condition that all true consequences of theory X are also (true) consequences of theory Y, i.e., formally:

(BiC)   Q(X) ∩ Q(T) ⊆ Q(Y) ∩ Q(T)

To prove this, start from (BiC) and note first that Q(X) ∩ Q(T) = Q(X ∪ T).
Hence (BiC) is equivalent to the claim: Q(X ∪ T) is a subset of Q(Y ∪ T). This
is on the first level equivalent to the claim: Y ∪ T is a subset of X ∪ T, which in
its turn is equivalent to (Bi): Y − T ⊆ X − T.
Model level          Mistakes                   Matches
  internal           T − X                      X ∩ T
                     internal mistakes          internal matches
                     (wrongly missing models)   (correct models)
  external           X − T                      X̄ ∩ T̄
                     external mistakes          external matches
                     (mistaken models)          (rightly missing models)
Consequence level
  nonlaw             Q(X) − Q(T)                Q̄(X) ∩ Q̄(T)
                     nonlaw mistakes            nonlaw matches
                     (false consequences)       (false nonconsequences)
  law                Q(T) − Q(X)                Q(X) ∩ Q(T)
                     law mistakes               law matches
                     (true nonconsequences)     (true consequences)
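The bridge theorem can also be checked by brute force on a tiny frame: the external clause holds exactly when every true consequence of X is a consequence of Y. The four-element stand-in for Mp below is, of course, illustrative.

```python
from itertools import combinations

Mp = frozenset(range(4))   # illustrative four-element stand-in for Mp

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def Q(X):
    """Copowerset of X: all supersets of X within Mp, i.e., the sets
    corresponding to the consequences of theory X."""
    return {Z for Z in subsets(Mp) if X <= Z}

# (Bi)  Y - T subset of X - T   iff   Q(X) & Q(T) subset of Q(Y) & Q(T)
for X in subsets(Mp):
    for Y in subsets(Mp):
        for T in subsets(Mp):
            assert ((Y - T) <= (X - T)) == ((Q(X) & Q(T)) <= (Q(Y) & Q(T)))
```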
reflexivity:    MTL(X, X, Z)
antisymmetry:   if MTL(X, Y, Z) and MTL(Y, X, Z) then X = Y
symmetry:       MTL(X, Y, Z) if and only if MTL(Z, Y, X)
transitivity:   if MTL(X, Y, Z) and MTL(Y, V, Z) then MTL(X, V, Z)
Hence, from left reflexivity, left antisymmetry and left transitivity it follows
that MTL(X, Y, Z) is for fixed Z a partial ordering of theories. As a consequence, a sequence of theories converging to the truth is perfectly possible.
Some other interesting properties are:

centeredness:       MTL(X, X, X)
centering:          if MTL(X, Y, X) then X = Y
specularity:        if MTL(X, Y, Z) then MTL(X̄, Ȳ, Z̄)
concentricity:      if X ⊆ Y ⊆ Z then MTL(X, Y, Z) and MTL(Z, Y, X)
context neutrality: if X, Y and Z are subsets of Mp and Mp is itself a subset of a larger set of conceptual possibilities Mp*, then MTL(X, Y, Z) implies MTL*(X, Y, Z)
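Centeredness, centering and concentricity can likewise be verified exhaustively on a small frame; an illustrative sketch, with mtl implementing clauses (Bi) and (Bii):

```python
from itertools import combinations

Mp = frozenset(range(4))   # illustrative small frame

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def mtl(X, Y, Z):
    return (Y - Z) <= (X - Z) and (Z - Y) <= (Z - X)

for X in subsets(Mp):
    assert mtl(X, X, X)                      # centeredness
    for Y in subsets(Mp):
        if mtl(X, Y, X):                     # centering
            assert X == Y
        for Z in subsets(Mp):
            if X <= Y <= Z:                  # concentricity
                assert mtl(X, Y, Z) and mtl(Z, Y, X)
```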
Introduction
Thus far, we have been studying truth approximation from the ideal perspective
that we know the true theory or true description. Of course, in practice, this
is exactly what we do not know. In this section we will study truth approximation from the perspective of observational success of descriptions and theories,
and methodological rules based on success differences. We start with (propositional) descriptions. Then the idea of problems and successes of theories is
explicated in structuralist terms. This is followed by the Success Theorem,
according to which success dominance of a theory can be explained by the
hypothesis that it is closer to the truth than its competitor(s). This leads to the
conclusion that the evaluation methodology explicated in Part II, governed by
the rule of success, is functional for truth approximation.
Let us assume a propositional frame and suppose that we do not know the
(relevant whole) actual truth. At successive stages, an increasing part of the
truth may be assumed to become known, in some way or other, in terms of
which the success and the problems of a certain description will have to be
defined, as well as the comparative judgment that one description is more
successful than another. The latter may guide the choice between descriptions.
Let us also suppose that 'actual descriptions' are not presented in a piecemeal
way, but in total, that is to say, as far as the relevant elementary propositions
are concerned. Think of the descriptions resulting from one or more researchers
or mechanical description devices. To evaluate such descriptions we assume
the idealization of an infallible metaposition.
Let p and n indicate the (mutually exclusive) sets of elementary propositions
of which it has been established at a certain time13 that their truth-value is
'true' (positive) or 'false' (negative), respectively. Together p and n are called
the available data at a certain time. In the course of time both sets can of
course only grow, not shrink. Recall that descriptions were represented by the
set of unnegated elementary propositions. Consider now (description) x. The
union of p ∩ x and n ∩ (EP − x) = n − x indicates the set of elementary propositions about which it has been established that x makes correct elementary
claims, called the established matches, and the union of p − x and
n − (EP − x) = n ∩ x indicates the set of elementary propositions about which
it has been established that x makes elementary mistakes, called the established
mistakes. Together they constitute the descriptive success of x.
Now it is plausible to define that (description) y is at least as successful at a
certain time as x if and only if the set of established matches of y includes that
of x or, equivalently, if and only if the set of established mistakes of y is a
subset of that of x. Formally, using the latter version, description y
is at least as successful as description x if and only if

(Di)n   n ∩ y is a subset of n ∩ x
(Dii)p  p − y is a subset of p − x
The strong version for 'more successful' is obtained by requiring at least one
proper subset relation. The strong version is depicted in Figure 7.5., where the
actual truth t is indicated by an interrupted line, to stress that it is unknown.
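A minimal sketch of these clauses with finite sets, assuming the representation of descriptions as sets of unnegated elementary propositions; the sample data are illustrative.

```python
# Comparative descriptive success relative to data p (established true)
# and n (established false elementary propositions).

def at_least_as_successful(y, x, p, n):
    """(Di)n: n & y subset of n & x, and (Dii)p: p - y subset of p - x,
    i.e., y's established mistakes are among x's."""
    return (n & y) <= (n & x) and (p - y) <= (p - x)

def more_successful(y, x, p, n):
    """Strong version: at least one inclusion is proper."""
    return at_least_as_successful(y, x, p, n) and \
        not at_least_as_successful(x, y, p, n)

p = {"a", "b"}       # established true
n = {"c"}            # established false
x = {"a", "c"}       # wrongly asserts c, misses b
y = {"a", "b"}       # no established mistakes
assert more_successful(y, x, p, n)
```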
Notice that the 'success' of a description is completely symmetric with respect
to true and false elementary propositions. When we can establish that such a
proposition is true when it is true, we can also establish that it is false when it
is false, and vice versa. For this reason we call the present notion of success
symmetric. It is interesting to note here already that the primary notion of
success of theories is an asymmetric one, essentially due to the semantic
impossibility of realizing nomic impossibilities.
Figure 7.5. Increasing success of propositional descriptions: y is more successful than x, relative to data p/n: sets 1.1 and 5.1 empty and sets 3.1 or 7.1 nonempty

The plausible methodological rule prescribes, of course, to favor the more
(most) successful description, if any. That this rule is functional for truth
approximation is easy to see. The hypothesis that the more (most) successful
description is closer to the actual truth can explain its success dominance, and,
in view of the extra success, it is already impossible that the alternative(s) is
(are) closer to the actual truth.
To be sure, all claims presuppose the auxiliary hypothesis that p and n are
correct, which amounts to the assumption that p is a subset of t and n of EP − t.
It should be noted, however, that even if these conditions are satisfied, there
is no guarantee of truth approximation. Additional evidence may destroy the
success dominance, although a complete turn of the tables is impossible.
It will be clear that similar definitions of successfulness can be construed for
structure descriptions of a non-propositional nature, and that, moreover,
refinements are possible to include the success of 'partial' descriptions.
7.3.2. Nomic explication of methodological categories
Recall that as far as theories are concerned, we have dealt up to now with the
logical problem of defining nomic truthlikeness, assuming that T, the set of
nomic possibilities, is at our disposal. In actual scientific practice we don't
know T; it is the target of our theoretical and experimental efforts. Before we
turn our attention to methodological rules guiding these efforts, it is fruitful to
explicate the idea that one theory is more successful than another and to show
that this can be explained by the hypothesis that the first theory is more similar
to the truth than the second.
Recall that in the propositional form of the actual case, the (actual) truth
identify established external mistakes and matches, as well as established nonlaw mistakes and matches (see Table 7.2.)
Assuming R(t) and S(t), it is now easy to give explications of the notions of
individual problems and general successes of a theory X at time t we met in
Part II concerning the HDevaluation of theories. The set of individual problems,
Table 7.2. The matrix of established matches and mistakes of theory X

Model level          Established mistakes     Established matches
  internal           R − X                    X ∩ R
                     individual problem       individual success
                                              or neutral instance
  external           X − S                    X̄ ∩ S̄
Consequence level
  nonlaw             Q(X) − Q(S)              Q̄(X) ∩ Q̄(S)
  law                Q(S) − Q(X)              Q(X) ∩ Q(S)
                     general problem          general success
                     or neutral law
(Bii)R  R(t) − Y ⊆ R(t) − X
(⇔)     (X ∩ R(t)) − Y = 1.1 = ∅
(⇔)     X ∩ R(t) ⊆ Y ∩ R(t)
This explication of the individual problem clause can be read as ranging over
the established nomic possibilities: for all z in R(t), if z is a model of X then it
is a model of Y. Hence it will be called the established internal success clause
or the instantial (success) clause.
On the other side, for the general success clause we have two options for
explication, one on the first and one on the second level, leading again to two
equivalent comparative statements. To begin with the second level, the level of
consequences, theory Y is (T)externally or explanatorily at least as successful
as X if and only if the general successes, i.e., the established law matches, of X
form a subset of those of Y, that is, X has no extra general successes, i.e.,
established law matches, or, equivalently, the established law mistakes of Y
form a subset of those of X. Formally we get
(BiC)S  Q(X) ∩ Q(S(t)) ⊆ Q(Y) ∩ Q(S(t))
(⇔)     (Q(X) ∩ Q(S(t))) − Q(Y) = ∅
On the first level, the level of sets, this is equivalent to the condition that the
established external matches of X form a subset of those of Y, that is, X has
no extra established external matches, that is, Y has no extra established
external mistakes, or, equivalently, the established external mistakes of Y form
a subset of those of X. Formally,
(Bi)S   S̄(t) ∩ X̄ ⊆ S̄(t) ∩ Ȳ
(⇔)     Y − (X ∪ S(t)) = 5.1 = ∅
(⇔)     Y − S(t) ⊆ X − S(t)
The same line of formal reasoning as in the case of the equivalence of (Bi) and
(BiC) leads to the conclusion that (Bi)S and (BiC)S are equivalent. The first
level explication of the general success clause, i.e., (Bi)S, can be read as ranging
over the established nomic impossibilities: for all z in S̄(t), if z is not a model
of X then it is not a model of Y. Hence it will be called the established external
success clause. Its second level explication, (BiC)S, ranges over established
laws and may hence also be called the explanatory (success) clause.
The conjunction of the established internal and law clause forms the general
definition of the statement that one theory is at a certain time at least as
successful as another, relative to the data R(t)/S(t). More informally we say
that Y is more successful than X, relative to R(t)/S(t), indicated by
MSF(X, Y, R(t)/S(t)). We obtain the strict version MSF+(X, Y, R(t)/S(t))
when in at least one of the cases proper subsets are concerned. This strict
version is depicted on the first level in Figure 7.6 (in which T is indicated by
an interrupted circle to stress that it is unknown).
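Read at the first level, MSF amounts to two inclusion tests against the data, with R(t) the established realized possibilities and S(t) the strongest established law; a sketch with illustrative sample sets:

```python
# Comparative success of theories relative to data R (established realized
# possibilities) and S (strongest established law); names illustrative.

def msf(X, Y, R, S):
    """MSF(X, Y, R/S): instantial clause R - Y subset of R - X, and
    external (explanatory) clause Y - S subset of X - S."""
    return (R - Y) <= (R - X) and (Y - S) <= (X - S)

def msf_strict(X, Y, R, S):
    """Y more successful than X: at least one inclusion is proper."""
    return msf(X, Y, R, S) and not msf(Y, X, R, S)

R = {1, 2}
S = {1, 2, 3, 4}
X = {2, 5}        # individual problem 1, established external mistake 5
Y = {1, 2, 3}     # no established problems or external mistakes
assert msf_strict(X, Y, R, S)
```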
Figure 7.6. Increasing success of theories: Y is more successful than X, relative to data R(t)/S(t): sets 1.1 and 5.1 empty, sets 3.1 or 7.1 nonempty
Note that Figures 7.5. and 7.6. are formally equivalent, when we compare
n(t) with Mp − S(t). However, in view of the fundamentally different
methodological nature of these sets, this is only a formal similarity.
Now it is easy to prove the following crucial theorem:

Success Theorem:
If theory Y is at least as similar to the nomic truth T as X and if the
data are correct, then Y (always) remains at least as successful as X (i.e.,
if MTL(X, Y, T) and CD(R(t), S(t)) then MSF(X, Y, R(t)/S(t))).
From this theorem immediately follows the corollary that success dominance
of Y over X, in the sense that Y is at least as successful as X, can be derived
from, and hence explained by, the following hypotheses: the truth approximation
(TA-)hypothesis, Y is at least as similar to the nomic truth T as X, and the
auxiliary correct data (CD-)hypothesis.16
All notions in the theorem and the corollary have been explicated, and the
proof is, on the first level, only a matter of elementary set-theoretical manipulation, as will be clear from the following presentation of the theorem as an
argument:

(Bi)    Y − T ⊆ X − T          (Bii)   T − Y ⊆ T − X          (TA-hypothesis)
        T ⊆ S(t)                       R(t) ⊆ T               (CD-hypothesis)
(Bi)S   Y − S(t) ⊆ X − S(t)    (Bii)R  R(t) − Y ⊆ R(t) − X    (success dominance)
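The argument can also be checked exhaustively on a small frame: whenever MTL(X, Y, T) holds and the data are correct (R ⊆ T ⊆ S), MSF(X, Y, R/S) follows. An illustrative brute-force sketch:

```python
from itertools import combinations

Mp = frozenset(range(4))   # illustrative small stand-in for Mp

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def mtl(X, Y, T):
    return (Y - T) <= (X - T) and (T - Y) <= (T - X)

def msf(X, Y, R, S):
    return (R - Y) <= (R - X) and (Y - S) <= (X - S)

for T in subsets(Mp):
    for R in subsets(T):                                  # correct data: R <= T
        for S in [T | extra for extra in subsets(Mp - T)]:  # and T <= S
            for X in subsets(Mp):
                for Y in subsets(Mp):
                    if mtl(X, Y, T):
                        assert msf(X, Y, R, S)            # Success Theorem
```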
As a rule, a new theory will introduce some new individual problems and/or
will not include all general successes of the former theory. The idea is that the
relative merits can now be explained on the basis of a detailed analysis of the
relative 'position' to the truth. However, for such cases a general theorem is
obviously not possible.
The Success Theorem makes clear that and how empirical progress is possible
within a conceptual frame Mp for a domain D. It is important to note that the
specific TA-hypothesis presupposes the Nomic Postulate of the research program that ⟨D, Mp⟩ indeed generates a unique, time-independent set T of nomic
possibilities. The Nomic Postulate creates, as it were, the possibility that there
may occur theories closer to the truth than others and that if theories are more
successful than others, it may be (but need not be) for that reason. In other
words, although each specific example of empirical progress is explained on
the basis of the corresponding specific TA-hypothesis, assuming the
CD-hypothesis, the general possibility of empirical progress within a research
program is explained on the basis of the Nomic Postulate associated with
⟨D, Mp⟩.
Two successive generalizations bring us to the explanation of the success of
the natural sciences in general. First, the Nomic Postulate is true for all possible
conceptual frames Mp with respect to the natural domain D. Second, the Nomic
Postulate is true for all frames for all natural domains. We do not claim that
these generalizations don't have exceptions. If they are true in the majority of
cases, they serve their purpose.
7.3.4. Methodological consequences
Let us return to one particular combination ⟨D, Mp⟩ and the corresponding
Nomic Postulate, and let us spell out some methodological consequences of
the Success Theorem. Recall the Comparative Success Hypothesis (CSH) and
the Rule of Success (RS), introduced in Chapter 6:

CSH: Y (is and) remains more successful than X.
RS: When Y has so far proven to be more successful than X, i.e., when CSH
has been 'sufficiently confirmed' to be accepted as true, eliminate X in favor
of Y, at least for the time being.
(theory)realist will only appreciate it for its possible relation to truth approximation, whereas the empiricist and the referential realist will have intermediate
interests.
The Success Theorem shows that RS is functional for approaching the truth
in the following sense. Assuming correct data, the theorem suggests that the
fact that 'Y has so far proven to be more successful than X' may well be the
consequence of the fact that Y is closer to the truth than X. For the theorem
enables the attachment of three conclusions to the fact that Y has so far proven
to be more successful than X; conclusions which are independent of what
exactly 'the nomic truth' is:
– first, it is still possible that Y is closer to the truth than X, a possibility
which, when conceived as hypothesis, the TA-hypothesis, according to
the Success Theorem, would explain the greater success;
– second, it is impossible that Y is further from the truth than X (and
hence X closer to the truth than Y), for otherwise, so teaches the Success
Theorem, Y could not be more successful;
– third, it is also possible that Y is neither closer to nor further from the
truth than X, in which case, however, another specific explanation has
to be given for the fact that Y has so far proven to be more successful.
Hence we may conclude that, though 'so far proven to be more successful' does
not guarantee that the theory is closer to the truth, it provides good reasons
to consider this plausible. And this is increasingly the case, the more the number
and variation of tests of the comparative success hypothesis increase. It is in
this sense that we interpret the claim that RS is functional for truth approximation: the longer the success dominance lasts, the more plausible that this is the
effect of being closer to the truth.17 We may at least informally summarize the
situation by saying that the truth is an attractor when RS is systematically
applied.
In view of the way in which the evaluation methodology is governed by RS,
this methodology is, in general, functional for truth approximation. We would
like to spell out this claim in more detail. Recall that the separate and comparative HD-evaluation of theories was functional for applying RS in the sense that
they precisely provide the ingredients for the application of RS. Recall moreover
that HD-testing of hypotheses is functional for HD-evaluation of theories
entailing them. Hence, we get a transitive sequence of functional steps for truth
approximation, depicted in Scheme 7.1.
HD-testing of hypotheses
→ separate HD-evaluation of theories
→ comparative HD-evaluation of theories
→ Rule of Success (RS)
→ Truth Approximation (TA)

Scheme 7.1. Functional steps for truth approximation
Forward Theorem:
If the vocabulary is observational and if CSH, which speaks of remaining
more successful, is true, this implies the TA-hypothesis; that is, if Y is
not closer to the nomic truth than X, (further) testing of CSH will
sooner or later lead to an extra counterexample of Y or to an extra
success of X.
In other words, for observational theories it holds that 'so far proven to be more
successful' can only be explained by the TA-hypothesis (Success Theorem) or
by assuming that the comparative success hypothesis has not yet been sufficiently tested (Forward Theorem).
Note that it follows from the Success Theorem and the Forward Theorem
that CSH is even equivalent to the TA-hypothesis for observational vocabularies, assuming in addition that all members of T can in fact be realized, and
that T can in fact be established as a true hypothesis. This equivalence directly
follows from the fact that T contains all possibilities that can possibly be
realized and is the strongest (observational!) law that can possibly be
established.
[Scheme: the variants of truthlikeness — nomic and actual truthlikeness, each comparative or quantitative, basic or refined, trivial or nontrivial]
gives rise to three variants of 'the nomic truth', and hence of nomic truthlikeness
and nomic truth approximation: the observational truth, the theoretical truth,
and, in a sense in between, the substantial truth. Here 'the observational truth'
stands for the strongest true claim that can be made with the observation terms
of the vocabulary about all nomic possibilities, whereas 'the theoretical truth'
amounts to the strongest true claim that can be made with the (observational
and) theoretical terms of the vocabulary about all nomic possibilities. It turns
out to be possible to define, on the basis of the theoretical truth, which
theoretical terms really refer and which do not, to be summarized as 'the
referential truth'. This referential truth gives rise to the third kind of nomic
truth, the one in between, viz., 'the substantial truth', defined as the strongest
true claim in observational and referring theoretical terms about all nomic
possibilities. In Chapter 9 we will present the basic versions of stratified truth
approximation, and in Chapter 10, after the refinement of the basic definition
for unstratified theories, we will deal with stratified versions of refined truth
approximation.
The next chapter, viz., Chapter 8, will still be restricted to comparative actual
and basic nomic truthlikeness, and will shed more light on the foundations of
these definitions and their usefulness for explicating the idea of a correspondence theory of truth and several intuitions governing dialectical reasoning.
In all following chapters we will bring in matters of success and methodology,
whenever relevant. We have already done this in the previous section for some
fundamental aspects of the given definitions. In the final section of this chapter
we add a couple of additional logical and methodological consequences.
7.5. NOVEL FACTS, CRUCIAL EXPERIMENTS, INFERENCE TO
THE BEST EXPLANATION, AND DESCRIPTIVE RESEARCH
PROGRAMS
Introduction
In Section 7.3. we have already drawn the conclusion from the Success Theorem
and the Forward Theorem that RS, and hence the evaluation methodology,
was functional for truth approximation. And we may add that the unrestricted
C&H* is more successful than C&H. This success difference can be explained
by assuming that C is true as a dogma and that H* on its own, however far
it may be from the truth, is nevertheless closer to the truth than H on its own.
For under these assumptions, in structuralist terms, it follows immediately that
C&H* is also closer to the truth than C&H. Of course, using the Success
Theorem, this consequence explains the success difference.
There remains the diverging point about novel facts and the interesting
question concerning to what extent Lakatos' analysis of crucial experiments
can be upheld. We will first evaluate the emphasis Popper and Lakatos put
on novel facts from the truth approximation perspective. Then it will be shown
that Lakatos' relativization of crucial experiments directly follows from
that perspective.
We will close this chapter by considering the ideas of 'inference to the best
explanation' and 'descriptive, in particular, inductive research programs' in the
light of truth approximation.
7.5.1. Novel facts and ad hoc repairs
Suppose that our favorite theory has been falsified. Now it is possible that a
well-conceived change of the theory leads to a new theory which is not falsified
by the counterexample of the old. As Popper has stressed, in scientific practice it is
considered to be very important that such a new theory not only avoids the
problem of the old one, in which case it is just an ad hoc repair, but that it also
leads to new test implications which could not be derived from the old theory and
which turn out to pass the corresponding tests. Popper and Lakatos even thought
that this extra success, predicted novel facts, was the litmus test for whether or not
the new theory is possibly closer to the (relevant) truth than the old one.
From our analysis it immediately follows that it is formally possible that a
new theory is closer to the truth than the old one while it only corrects the
individual problems of the old one, without extra general success. Suppose that
X is a subset of T and let I be the general test implication of X (hence X is a
subset of I) which has been falsified. Suppose now that Y is such that Y ∩ I =
X and that Y is, like X, a subset of T. Under these conditions Y is closer to
the truth than X, only by losing known individual problems of X, without
unintended general success. For under the specified conditions, X must be a
subset of Y, and hence theory X implies all general hypotheses following from
theory Y, hence all general successes of Y.
A similar case is possible for theories X and Y containing T as a subset. Let
X fail to imply the established law L and let Y = X ∩ L; then Y is again closer
to the truth than X, with only L as extra general success, and no unintended
loss of individual problems.
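Both special cases can be checked by brute force on a small frame; an illustrative sketch, with mtl implementing the truthlikeness clauses:

```python
from itertools import combinations

Mp = frozenset(range(4))   # illustrative small frame

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def mtl(X, Y, T):
    return (Y - T) <= (X - T) and (T - Y) <= (T - X)

for T in subsets(Mp):
    # Case (a): X <= T, I a general test implication of X (X <= I),
    # and Y with Y & I == X and Y <= T: Y is at least as close to T as X.
    for X in subsets(T):
        for Y in subsets(T):
            for I in subsets(Mp):
                if X <= I and Y & I == X:
                    assert mtl(X, Y, T)
    # Case (b): T <= X and L an established law (T <= L):
    # Y = X & L is at least as close to T as X.
    for X in subsets(Mp):
        if T <= X:
            for L in subsets(Mp):
                if T <= L:
                    assert mtl(X, X & L, T)
```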
These special cases can be summarized as follows: if the theory under evaluation happens to be stronger or weaker than the true theory, the suggested ad
hoc repairs will bring one closer to the truth, without unexpected extra success.
However, if the theory is not simply stronger or weaker than the true theory,
then the suggested ad hoc changes of the theory will almost inevitably lead to
new predictions of extra success, which come true or turn out false depending
on whether the repaired theory is or is not closer to the truth. It is not even
excluded that
there are plausible general conditions under which 'almost inevitably' can be
replaced by 'inevitably'. However this may be, if comparative HDevaluation
of a new theory is in favor of that theory, then, depending on the test route,
this either means an unexpected extra individual problem of the old theory, or
an unexpected extra general success of the new theory, where what is unexpected
is, of course, determined by the old theory. In sum, ad hoc repair of a theory
will seldom be a real improvement without unexpected extra success. In other
words, comparative HDevaluation of an ad hoc repair will either lead to
unexpected extra successes of the new theory or extra successes of the old
theory that could have been, but were not, explicitly expected before. Hence,
besides some qualifications, the intuitions of Popper and Lakatos with respect
to ad hoc repairs and novel facts are largely justified. Instead of a ban on ad
hoc changes, they can be allowed, provided they are subjected to comparative
HDtesting with the original theory.
7.5.2. Crucial experiments
We have seen before that the HDtesting can be applied to the comparative
hypothesis "theory Y is closer to the truth than theory X". The comparative
hypothesis is suggested when one theory is more successful than another. Let us
now look from the truth approximation perspective at the situation in which two
theories are equally successful, and hence at the idea of a so-called crucial test.
So let the two theories concerned, say X and Y, be equally successful before
the crucial test. We may or may not assume, in addition, that both theories
have not yet been falsified. The methodological side of the idea of a crucial
test amounts to the following. First, a crucial test typically is supposed to be
a repeatable experiment, hence it concerns general test implications. More
specifically, the idea is to derive from X a general test implication I(X) of the
form "always when C then F" and from Y a general test implication I(Y) of the
form "always when C then not-F". Let us further assume that it follows from
our background knowledge that one of these general testable conditionals has
to hold and that it is possible to test them with C as initial condition.
Under these conditions it is excluded that the two theories will remain
equally successful, for the experiments will force us to accept either J(X) or
/(Y) . Moreover, if the experiments force us to accept for instance J(Y), this not
only implies that Y is more successful than X as far as general successes are
concerned, but also that Y is more successful than X as far as individual
problems are concerned. The reason is that J(Y), starting as a falsifying general
hypothesis of X in the sense of Popper, has become a falsifying general fact of
X. Every investigated 'Ccase' apparently resulted in notF, making their combination not only in agreement with J(Y)'s prediction, but also making it a
170
falsifying instance of /(X) (and hence of X). /(Y) summarizes and, pace Popper,
inductively generalizes these falsifying instances.
The assumption that the tests can and will have C as initial test condition
is important. If, due to practical constraints, the initial test conditions have to
be not-F and F, respectively, the situation is not that asymmetric. For it is not
difficult to check that in that case every successful test result for the one will
merely be a neutral result for the other, and not a falsifying one.
So much for the methodological aspects of a crucial test. What about its
consequences for truth approximation? Of course, at least all conclusions we
have drawn from the comparative statement that one theory is more successful
than another follow: X cannot be closer to the truth than Y, and Y can still be
closer to the truth than X. Moreover, new experiments (related to an old or a
new GTI of X or Y) may destroy the success dominance. This cannot destroy
the conclusion that X is not closer to the truth than Y, but it destroys the
conclusion that Y could still be closer to the truth than X. As a consequence,
a crucial experiment may temporarily lead to better truth approximation
perspectives for one theory compared to the other, but these perspectives may
well be destroyed later. To be sure, the reverse perspective cannot arise, except
when the outcome of the crucial experiments is called into question or when
new considerations about auxiliary hypotheses lead to the conclusion that the
supposed falsifying general fact is, in fact, a general success.
For the case that both X and Y had not yet been falsified before the crucial
test, the following noncomparative conclusions can be added to the above
truth approximation conclusions. X is false, and Y may still be true. Moreover,
Y may later become falsified as well, but the conclusion that X is false can
only be withdrawn by reconsidering data or auxiliary hypotheses.
In several respects the present analysis is in accordance with Lakatos' analyses
of crucial experiments, in which the temporary character and the revisability
of the conclusions are generally accepted. Our analysis adds to this that the
conclusions can be stated unproblematically in terms of (perspectives on) truth
and truth approximation, and can be generalized to falsified theories. The latter
point is very important as long as the theories under consideration must be
assumed to be 'born refuted', for instance due to unavoidable idealizations.
7.5.3. Inference to the best explanation
If a theory has so far proven to be the best one among the available
theories, then (choose it, i.e., apply RS and) conclude, for the time being,
that it is the closest to the nomic truth T of the available theories
IBT does not have the three shortcomings of IBE. It applies to unfalsified as
well as to falsified theories. It couples a comparative conclusion, being the
closest to the truth, to a comparative premise, being the best theory. Last but
not least, it is directly justifiable in terms of truth approximation, viz., by the
Forward Theorem. To be precise, the comparative hypothesis that the best
theory remains the best, a hypothesis that apparently seems to be true, implies
that the best theory is the closest to the truth.
Hence, IBT can be seen as a, for good reasons, severely corrected version of
IBE. Let us summarize the differences. IBE is restricted to the case that the
best theory has not yet been falsified, whereas IBT applies whenever there
is a best theory, already falsified or not. Moreover, IBE is,
unlike IBT, a standard rule of inference in the sense that the conclusion
is supposed to be true when the premises are. That is, according to IBE the
best unfalsified theory is supposed to be true, whereas IBT only infers that the
best theory is the closest to the truth. As a consequence, even if the best theory
is not falsified, IBT does not conclude that this theory is true, it still leaves
perfect room for the possibility that it is false. Finally, whereas it is difficult to
imagine a justification for IBE, IBT has a straightforward justification in terms
of truth approximation.
7.5.4. Descriptive and inductive research programs
The truth approximation analysis also gives the opportunity to consider the
nature of descriptive research programs. A descriptive research program uses
an observational conceptual frame, and may either exclusively aim at one or
more true descriptions (as for example in most historiography), or it may also
aim at the true (observational) theory in the following specific way. Aiming at
the true theory is carried out exclusively by establishing observational laws.
Given that this requires (observational) inductive jumps (of the first and the
second kind), it is plausible to call such programs inductive research programs.21
It is easy to see that such programs 'approach the truth by induction'. For the
establishment of observational laws, the microstep of the HDmethod may be
applied, resulting in true descriptions which either falsify the relevant general
observational hypothesis or are partially derivable from it. According to the
basic definition, assuming that accepted observational laws are true, any newly
accepted observational law guarantees a step in the direction of the true theory.
For if S(t) and S(t'), with t' later than t, indicate the strongest accepted
law at t and t', respectively, then S(t') is closer to T than S(t). Hence,
inductive research programs are relatively safe strategies of truth approximation:
as far as the inductive jumps happen to lead to true accepted laws, the approach
not only makes truth approximation plausible, it even guarantees it.
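This guaranteed step can be illustrated with the basic definition of 'at least as close to the truth' in set terms (no new mistaken models, and all correct models kept). The sets and the function name in the following sketch are illustrative, not the author's:

```python
def at_least_as_close(Y, X, T):
    # Basic definition (weak form): Y is at least as close to the truth T as X iff
    # Y - T <= X (Y adds no new mistaken models) and X & T <= Y (Y keeps all
    # correct models of X).
    return (Y - T) <= X and (X & T) <= Y

# Toy frame: T is the target set; S_t and S_t2 represent the strongest accepted
# laws at t and at a later t', with T <= S_t2 <= S_t (true laws, getting stronger).
T    = {1, 2}
S_t  = {1, 2, 3, 4}
S_t2 = {1, 2, 3}

print(at_least_as_close(S_t2, S_t, T))  # True: a guaranteed step towards T
```

Since S(t') − T ⊆ S(t) holds whenever S(t') ⊆ S(t), and S(t) ∩ T = T ⊆ S(t') whenever the new law is true, the step is indeed guaranteed for any such pair of true, strengthening laws.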
Note that inductive logic of the Hintikka variant ('induction by enumeration
and elimination'), see Chapter 6, in which generalizations are taken into account
up to the moment that they are falsified, is tailored to such inductive programs.
Explication of the nature of explanatory programs can best be postponed to
Chapter 9, which deals with stratified theories.
Concluding remark
At the end of Chapter 6 we argued for replacing the term Context of Justification
by Context of Evaluation, in view of the fact that the evaluation of theories
also deals with falsified theories, i.e., theories that certainly cannot be justified.
In this chapter we have shown that the term Context of Evaluation would also
be more appropriate in the light of the evaluation of truth approximation
claims of theories.
The next chapter will strengthen this claim, for we will see that the intuitive
foundations of the presented definitions, as well as related intuitions concerning
the correspondence theory of truth and principles of dialectical reasoning, all
deal as easily with false and falsified theories as with true and not-yet-falsified theories.
Introduction
In this chapter we present some topics concerning truthlikeness and truth
approximation in terms of intuitions that play, at least in our opinion, an
important role in scientific and philosophical theorizing. In Section 8.1. we will
spell out in modeltheoretical terms that the basic definition of 'more truthlike'
in the nomic sense not only has an intuitively appealing 'model foundation',
but also, at least partially, a conceptually plausible 'consequence foundation'.
Moreover, combining the relevant parts of both leads to a very appealing 'dual
foundation', the more so since the relevant methodological notions, viz., 'more
successful' and its ingredients provided by HD-evaluation, can be given a
similar dual foundation. In Section 8.2. we will argue that the definition of
basic truthlikeness can be reinterpreted as an intralevel explication, for the
actual as well as the nomic perspective, of the idea of a correspondence theory
of truth. In contrast to the usual interlevel reading, the intralevel reading
makes straightforward sense of the intuitions that 'true/false' is a matter of
corresponding or not corresponding to the facts and that 'closer to the truth'
is a matter of better corresponding to the facts. Finally, in Section 8.3. we will
show that the basic definitions of 'more truthlike' and 'more successful' lead to
straightforward explications of several dialectical notions, such as 'dialectical
negation', 'double negation', the triad 'thesis antithesissynthesis', and 'the
absolute'. In fact, logical, methodological and ontological explications of such
notions can be given for the actual as well as for the nomic perspective.
The reader is warned that the second and third section are rather controversial,
a fact which will also become clear from the way of presentation. They
might well be skipped on a first reading, since only a few references to them will
be made later.
Introduction
In the previous chapter we introduced, among other things, the basic definition
of nomic truthlikeness and its methodological consequences. In this section we
study the optimal conceptual foundations of the definition and its consequences.
In fact, it is the start, to be continued in later chapters, of a systematic search
for intuitively appealing conceptual foundations of ways of truth approximation
by the HD-method in terms of consequences and models of theories. In each
case we first consider the crucial notions 'more truthlike', 'more successful' and
results of HD-evaluation separately. Secondly, we look for a coherent foundation, that is, a foundation that is essentially the same for all three notions.
There are three types of foundations of a definition, viz., a 'model foundation',
purely in terms of models, a 'consequence foundation', purely in terms of
consequences and a mixed or 'dual foundation', partly in terms of models,
partly in terms of consequences.
It will turn out that the dual foundation is the superior one: it can be given
for all distinguished qualitative ways of nomic truth approximation (basic and
refined, and observational, referential and theoretical) and it is in all cases
conceptually plausible and intuitively appealing. The model, as well as the
consequence foundation, turn out to have specific shortcomings, the first with
respect to the methodological notions, the second with respect to the refined
notions. Since stratified and refined truth approximation will be dealt with in
Chapters 9 and 10, these conclusions can only be drawn, by way of summary,
at the end of Chapter 10.
Recall that the basic explication of truthlikeness (and of successfulness)
presupposes a certain fixed vocabulary generating a fixed set of conceptually
possible structures, the conceptual frame. Hence, there is no dubious metaphysics involved in talking about 'the truth'. Like many others, Popper assumes,
moreover, that 'the truth' or 'the true theory' concerns one unique structure,
viz., the structure representing the actual world. However, we shall not adopt
this assumption. Although Popper is formally interested in the problem of
'actual truthlikeness', he seems informally more interested in the problem of
'nomic truthlikeness', to use our terminology. He wants to define the idea that
one theory is closer to the truth than another, and theories, in the natural
sciences at least, deal with what is nomically (im)possible. That is, the truth
does not so much concern the actual state of a system, but the set of nomically
possible states of a system or some other set of intended applications. However
this may be in various particular contexts, in our approach 'the truth' is
technically equated with some target set of structures, whereas some terminology will suggest, though strictly not presuppose, the nomic interpretation
of that target set. That is, although we will continue to speak of nomic
truthlikeness and nomic truth approximation in this chapter, we will for the
rest mainly use terminology that is not laden with the Nomic Postulate.
reader is referred to Tables 7.2. and 7.3. of the previous chapter, in which the
terminology to be introduced in this chapter has already been mentioned
between brackets. With this terminology we want to stress that the analysis of
'nonactual' truthlikeness and truth approximation is useful for any target set
T of conceptual possibilities.
Although Popper's definition of 'more truthlike', viz., more true and fewer false
consequences (Popper 1963a), was intuitively very appealing, it nevertheless
turned out, as we have stated already, to be rather problematic. However, as
a stepping-stone towards the foundations of the basic definition, it is very
instructive to first disentangle Popper's proposal in logical terms.
Initially we shall focus on a first order language L (more precisely, on a
vocabulary generating a first order language) and, for convenience, even more
specifically on finitely axiomatizable theories of that language, and hence on
theories that can each be conceived as the set of logical consequences of one
single sentence. The restriction to finitely axiomatizable theories includes
the assumption that the target set is itself axiomatized by a single true
sentence θ. Popper's proposal can then be disentangled into two underlying
intuitions about theories φ and ψ:

PI: all good parts of φ are (good) parts of ψ, in the sense that, for all
parts of the truth, if φ has one, ψ has it as well
NI: all bad parts of ψ are (bad) parts of φ, in the sense that, for all
non-parts of the truth, if ψ has one, φ has it as well
"fJ entails
a": "a is true in every model of fJ" (notation: fJ Fa). Cn(a) indicates
the set of consequences of c(. A true consequence of 'J. is a consequence of a
which is at the same time a consequence of 8, and hence true in all target
structures. Hence, Cn(a) n Cn(8) indicates the set of true consequences (the
truth content) of a. A consequence of a which is not a consequence of 8 is false
in the sense that it is false in some target structure. Hence, Cn(a)  Cn(8)
indicates the set of false consequences (the falsity content).
Consequently, the formal translations of C.PI and C.NI are as follows:

(i)*  Cn(φ) ∩ Cn(θ) ⊆ Cn(ψ)
(ii)* Cn(ψ) − Cn(θ) ⊆ Cn(φ)

or, equivalently for (ii)*: Cn(ψ) ⊆ Cn(φ) ∪ Cn(θ)
Recall that, strictly speaking, such a definition concerns the weak notion: "ψ
is at least as close to θ as φ", leading to the strict notion by imposing, in
addition, the requirement that the reverse claim "φ is at least as close to θ as
ψ" does not hold at the same time.
Almost equally well-known as Popper's plausible definition of 1963, reproduced above,
is the knock-down argument against it that came forward in
1974, and which has already been mentioned in the introduction of the previous
chapter. Miller (1974) and Tichý (1974) independently proved a theorem showing
that the definition had a property of which even Popper immediately
conceded that it was highly undesirable. The theorem presupposes the uts-assumption,
or at least the assumption that the truth is complete. It excludes
that a false theory, i.e., a theory having at least one false consequence, can be
closer to the truth than another one in the strict sense. Intuitively, however, it
is desirable that a false theory can be closer to the truth than another one. For
example, even if Einstein's theory is false, it must be possible that it is closer
to the truth than Newton's theory. Hence, in the light of the Miller/Tichý
theorem one has to conclude that Popper's explication is inadequate.
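In a finite propositional setting the theorem admits a brute-force check. Below, theories are identified with their sets of models over a tiny set of worlds, a complete truth with the single actual world 0, and Popper's two clauses with their model translations (derived later in this section): Mod(ψ) ⊆ Mod(φ) ∪ Mod(θ), and Mod(φ) ⊆ Mod(ψ) or Mod(θ) ⊆ Mod(ψ). The frame and names are illustrative:

```python
from itertools import chain, combinations

WORLDS = {0, 1, 2, 3}            # a tiny set of possible worlds
THETA = {0}                      # a complete truth: exactly the actual world 0

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def popper_weak(psi, phi, theta):
    # Model translation of Popper's clauses:
    # psi is at least as close to theta as phi.
    return psi <= (phi | theta) and (phi <= psi or theta <= psi)

# Miller/Tichy: no FALSE theory (0 not among its models) is STRICTLY
# Popper-closer to a complete truth than any other theory.
counterexample = any(
    popper_weak(psi, phi, THETA) and not popper_weak(phi, psi, THETA)
    for psi in subsets(WORLDS) if 0 not in psi
    for phi in subsets(WORLDS))
print(counterexample)  # False: no false theory is strictly closer
```

The exhaustive search finds no counterexample, which is exactly the undesirable property: falsity blocks strict comparability under Popper's definition.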
8.1.2. A related way of introducing the basic definition
whereas his second clause (ii)* is much stronger than the formalization of the
model-interpretation of the positive intuition PI, viz.:

M.PI (kk) Mod(φ) ∩ Mod(θ) ⊆ Mod(ψ)

This is particularly interesting because the basic definition will turn out to be
based on M.NI and M.PI.
Let us start with the transformations. The standard logical theorems
"α ∈ Cn(α)" (trivial) and "Cn(α ∨ β) = Cn(α) ∩ Cn(β)" (non-trivial) immediately
lead to a translation of Popper's definition in terms of elementary consequence
claims. Popper's first clause (i)*, the formalization of C.PI, is equivalent to:

(j)* ψ ⊨ φ ∨ θ

and his second clause (ii)* to the disjunction:

(kk)* Mod(φ) ⊆ Mod(ψ) or Mod(θ) ⊆ Mod(ψ)

It is easy to see that each of the two disjuncts of (kk)* separately implies (kk),
with the consequence that (kk)* is (much) stronger than (kk), and hence that
the consequence-interpretation of NI (C.NI) is (much) stronger than the model-interpretation
of PI (M.PI). The other way of easily seeing this is based on the
logical transformation of (kk) in terms of sets of consequences, which is, together
with the intermediate logical transformation, given by:

(kk) Mod(φ) ∩ Mod(θ) ⊆ Mod(ψ)
(jj)  φ ∧ θ ⊨ ψ
(ii)  Cn(ψ) ∩ Cn(¬θ) ⊆ Cn(φ)

Since (ii)* amounts to Cn(ψ) ∩ (Sent(L) − Cn(θ)) ⊆ Cn(φ), it is (much) stronger than (ii),
for, apart from the tautologies, Cn(¬θ) evidently is a proper subset of Sent(L) − Cn(θ).
In the light of the foregoing we may conclude that the model-interpretation
M.PI and M.NI of the intuitions PI and NI is not only intuitively appealing,
but may be non-defective. This suggests the basic definition of 'more truthlike'
for finitely axiomatizable theories in terms of models: "ψ is closer to θ than φ" iff

(k)  Mod(ψ) − Mod(θ) ⊆ Mod(φ)
(kk) Mod(φ) ∩ Mod(θ) ⊆ Mod(ψ)

or, equivalently, in terms of entailment:

(j)  ψ ⊨ φ ∨ θ
(jj) φ ∧ θ ⊨ ψ

or, equivalently, in terms of consequences:

(i)  Cn(φ) ∩ Cn(θ) ⊆ Cn(ψ)
(ii) Cn(ψ) ∩ Cn(¬θ) ⊆ Cn(φ)
8.1.3. Comparison
Table 8.1. depicts the relations between the two (formalized) interpretations of
the two intuitions, and demonstrates that Popper's definition is stronger than
the basic definition.
It is also illuminating to draw figures that represent the two set versions of
both definitions. Figure 8.1. represents Popper's definition. Figure 8.1.a. represents
the sets of consequences of φ, ψ and θ as subsets of Sent(L) or, more
precisely, of the set of equivalence classes of logically equivalent sentences (i.e.,
Table 8.1. Comparison of Popper's definition with the basic definition as consequence- versus model-interpretation of the same intuitions

       Popper's definition:           basic definition:
       consequence-interpretation     model-interpretation

PI:    C.PI = (i/j/k)*      <=>          (i/j/k) = M.NI   :NI
NI:    C.NI = (ii/jj/kk)*   => (not <=)  (ii/jj/kk) = M.PI   :PI
Figure 8.1. Popper's definition: (a) consequences, (b) models
propositions in the technical sense) of L. Figure 8.1.b. represents the model
sets of φ, ψ and θ as subsets of Str(L) or, more precisely, of the set of equivalence
classes of isomorphic structures of L. When the definition applies, this means
that certain areas are empty on logical or conceptual grounds. A blackened
area indicates that the area is empty due to (i)* in Figure 8.1.a. or equivalently
due to (k)* in Figure 8.1.b. In Figure 8.1.a. double shading indicates the empty
area due to (ii)*. Its equivalent (kk)* in terms of models is indicated in
Figure 8.1.b. by horizontal and vertical shading of the two areas of which at
least one should be empty. Note that this implies that their double shaded
intersection should be empty anyhow.
Figure 8.2. represents the basic definition, using the same conventions.
Figure 8.2. Basic definition: (a) consequences, (b) models
Figure 8.2.a. depicts the consequence version of the basic definition, i.e., (i) and
(ii). Note that all the theories have one point in common, viz., the tautology.
Figure 8.2.b. depicts the model version, i.e., (k) and (kk). Note the plausible
perfect similarity between Figure 8.2.b. and Figure 8.1.a. and the dissimilarity
between Figure 8.2.a. and Figure 8.1.b.
From Figures 8.1.a. and 8.2.a. it is easy to see that (ii)* is much stronger
than (ii), for the double shaded area in the first is clearly a superset of the one
in the second. Similarly, from Figures 8.1.b. and 8.2.b. it is easy to see that
(kk)* is much stronger than (kk), for (kk) only requires that the double shaded
area is empty, whereas (kk)* adds to this that at least one of the two areas
shaded only once is also empty.
8.1.4. The knock-down argument and its blockade
It will now be argued that and why the arguments of Miller and Tichý cannot
be used to knock down the weaker, basic definition, based on M.NI and M.PI.
For that purpose it is necessary to set up a decomposed version of the
Miller/Tichý theorem and its proof in terms of consequences; hence, in particular,
Figures 8.1.a. and 8.2.a. may help the reader in following the arguments.

Lemma 1: If (ii)* and ψ is false, then Cn(ψ) ⊆ Cn(φ), i.e., ψ is a weakening of φ.

Proof: It has to be shown that Cn(ψ) − Cn(φ) is empty. Assume that some α belongs
to that set. Given that ψ is false, it follows from (ii)* that there must be
some β in (Cn(φ) ∩ Cn(ψ)) − Cn(θ). This
implies that α&β is a false consequence of ψ, but not of φ, which is excluded
by (ii)*. Hence, we may conclude that ψ is a weakening of φ. Qed.

Lemma 2: If (i)*, θ is complete and ψ is false, then Cn(φ) ⊆ Cn(ψ), i.e., φ is a weakening of ψ.

From the proofs of the two lemmas it is clear that the consequence-interpretation
of NI, i.e., C.NI, hence (ii)*, is crucial for the proof and hence for the
Miller/Tichý theorem.
Assuming that the basic definition is the adequate one for finitely axiomatizable
theories, it is tempting to claim that it is clearly the model-interpretation of
the positive and the negative intuition (M.PI and M.NI) which leads to the
adequate definition, and hence provides its proper intuitive foundation.
However, there is another interesting interpretation of the situation. In view of
the equivalence of the formal translations of M.NI and C.PI, the basic definition
may also be seen to be based on the combination of the model- and the
consequence-interpretation of the positive intuition, that is:

C.PI (i)  Cn(φ) ∩ Cn(θ) ⊆ Cn(ψ)
M.PI (kk) Mod(φ) ∩ Mod(θ) ⊆ Mod(ψ)

We call this the dual foundation of the basic definition, roughly: more true
consequences and more correct models.
(Bi)  Y − T ⊆ X (or: Y ⊆ T ∪ X)
(Bii) X ∩ T ⊆ Y

Let Q(X) indicate the set of s-consequences of X, i.e., the set of supersets of X.
Then the two clauses are equivalent to:

(Bi)  <=> Q(X ∪ T) ⊆ Q(Y)
(Bii) <=> Q(Y) ⊆ Q(X ∩ T)
Note first that the equivalent of (Bi) on the right side corresponds to (BiC) of
the previous chapter. Note also that a member of Q(X ∪ T) is an s-consequence
of X and T, i.e., a true s-consequence of X. Hence, (Bi) roughly states that Y
has more true s-consequences than X. Hence, the structuralist version of the
basic definition can also, in view of the Q-reading of (Bi), and the straightforward
reading of (Bii), be based on the dual foundation, i.e., the consequence-
and the model-interpretation of the positive intuition:

all true s-consequences of X are (true) s-consequences of Y

and

all correct s-models of X are (correct) s-models of Y

Or, roughly, "Y is closer to the truth than X" iff it has more true s-consequences
and more correct s-models.
Again there is also a plausible, though non-intuitive, consequence foundation.
As mentioned already, (Bii) is equivalent to
Q(Y) ∩ Q(cT) ⊆ Q(X) ∩ Q(cT)

where cT indicates the complement of T.
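The Q-reading can be checked concretely on a tiny frame. The following sketch (frame, sets and names illustrative) verifies, for one example, that clauses (Bi) and (Bii) coincide with their formulations in terms of s-consequences:

```python
from itertools import chain, combinations

FRAME = {1, 2, 3, 4}  # a tiny conceptual frame of four structures

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def Q(X):
    # Q(X): the s-consequences of X, i.e., the supersets of X within the frame.
    return {I for I in subsets(FRAME) if X <= I}

T, X, Y = {1, 2}, {1, 3, 4}, {1, 2, 3}

# (Bi)  Y - T <= X   is equivalent to   Q(X | T) <= Q(Y)
print((Y - T) <= X, Q(X | T) <= Q(Y))
# (Bii) X & T <= Y   is equivalent to   Q(Y) <= Q(X & T)
print((X & T) <= Y, Q(Y) <= Q(X & T))
```

In each line both tests agree, as the equivalences require; replacing T, X and Y by other subsets of the frame leaves that agreement intact.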
In sum, so far it has been shown, starting from Popper's failing consequence
definition of 'more truthlike', viz., more true and fewer false consequences, that
a reinterpretation of Popper's underlying intuitions in terms of models leads
to the perfectly viable so-called basic definition, viz., more correct and fewer
mistaken models. It has also been argued that this intuitive 'model foundation'
of the basic definition can be replaced by a 'dual foundation', which is also
intuitively appealing: more true consequences and more correct models.
Moreover, there is, on second analysis, also a plausible 'consequence foundation',
which is, however, not intuitively appealing: more true and fewer strongly
false consequences. Finally, similar foundations are possible for the structuralist
version of the basic definition.
8.1.7. Basic methodology
The given foundational analysis invites extension in at least the following
directions. What foundations can be given for related definitions? (1) The basic
definition of 'more successful' in terms of successes and problems, (2) the basic
definition of 'closer to the observational truth' and 'observationally more successful',
in the case of a distinction between theoretical and observational
components, (3) the refined definitions of 'closer to the truth' and 'more
successful', and their stratified versions, (4) the quantitative definitions of truthlikeness
and success, based on numbers of models, or distances between them.
It may seem plausible that in some of these cases intuitive model and dual
foundations can also be given and that, at most, a non-intuitive consequence
foundation can be provided. In this subsection we will investigate these
possibilities for the first, methodological question. In the next chapters we will
pay attention to the other questions.
Recall that we have shown in detail, in Chapters 6 and 7, that the hypothetico-deductive
(HD-)method is functional for basic truth approximation when combined
with the rule of success (RS). This rule presupposes an explication of the
idea that one theory is more successful than another in a way similar in spirit
to the basic definition of greater truthlikeness. The rule prescribes adopting the
more successful theory. The core of the functionality proof was twofold. First,
it was shown that HD-evaluation of separate theories provides the ingredients
for comparing the success of two theories in the basic sense. Second, it was
proved that 'being more truthlike in the basic sense' implies 'being at least as
successful in the basic sense', the Success Theorem.
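On a tiny frame the Success Theorem can even be checked exhaustively by brute force. The following sketch (frame and names illustrative) encodes the basic clauses and verifies the implication for all combinations of theories, correct data and established laws:

```python
from itertools import chain, combinations

FRAME = {1, 2, 3, 4}  # a tiny conceptual frame

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def closer(Y, X, T):
    # basic 'at least as truthlike': no new mistaken models, all correct models kept
    return (Y - T) <= X and (X & T) <= Y

def as_successful(Y, X, R, S):
    # basic 'at least as successful' relative to the established nomic
    # possibilities R (correct: R <= T) and the strongest established law S
    # (true: T <= S)
    return (Y - S) <= X and (X & R) <= Y

# Success Theorem, checked exhaustively: if Y is at least as close to T as X,
# then Y is at least as successful, whatever the correct data R <= T <= S.
ok = all(as_successful(Y, X, R, S)
         for T in subsets(FRAME)
         for X in subsets(FRAME)
         for Y in subsets(FRAME) if closer(Y, X, T)
         for R in subsets(T)
         for S in subsets(FRAME) if T <= S)
print(ok)  # True
```

The check succeeds because Y − S ⊆ Y − T ⊆ X (since T ⊆ S) and X ∩ R ⊆ X ∩ T ⊆ Y (since R ⊆ T), which is essentially the proof of the theorem in the basic case.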
Now it will be argued that there is also an intuitive dual foundation of 'being
more successful' in the basic sense and, moreover, that this foundation is
strongly suggested by the method of HD-evaluation. Combining the relevant
observations, the ultimate conclusion can be that the evaluation methodology
is a method of basic truth approximation having an intuitive dual foundation
in terms of successes and problems. In contrast to this attractive dual foundation,
however, there does not now seem to be a plausible, let alone an intuitively
appealing, model foundation. On the other hand, there is a plausible consequence
foundation of HD-results and of 'more successful' in the basic sense.
The presentation will be in structuralist terms to ensure that the continuity
between the basic and refined approaches can be as large as possible in
Chapter 10.
Let us briefly review the HD-method, in the sense of separate and comparative
HD-evaluation, presented in Part II. The core of HD-evaluation amounts
to deriving from the theory in question, say X, (general) test implications and
to subsequently testing them. A test implication amounts to a superset I of X
with the claim "T ⊆ I". Testing this claim leads sooner or later either to a
counterexample of this claim, and hence to an (individual) problem of X, or
to the (revocable) acceptance of (the claim of) I: a (general) success of X.
Although a counterexample implies the falsification of X, this is, according to
the evaluation methodology, no reason to stop the HD-evaluation of X.
Systematic HD-evaluation of X will lead to a (time relative) evaluation report
of X, consisting of registered problems and successes. These two types of
HD-results directly specify the intuitive dual foundation of the HD-results.
Recall also that a success essentially concerns the idea of a law, when T is
interpreted as the set of nomic possibilities. For, if T is a subset of I, the latter
says something about all nomic possibilities, viz., that they satisfy the conditions
specified in I. In this subsection some 'law-terminology' will suggest, but not
presuppose, this interpretation.
A success of one theory may or may not be a success of another theory. The
same holds for problems. Successes and problems together provide the basis
for success comparison of theories. For comparative judgements it is easy to
have a common representation of all registered successes and problems of some
theories or, still more generally, of all established nomic possibilities and all
established laws. Let R(t) indicate the set of established nomic possibilities at
time t and S(t) the intersection of all established laws. Hence S(t) is the strongest
law established, and all its consequences are laws. Assuming that mistakes have
not been made, R(t) is a subset of T and S(t) a superset of T. The evaluation
report of X can now be summarized by R(t) − X, indicating the set of all
problems of X, reinterpreted as 'established wrongly missing models', and by
Q(S(t)) ∩ Q(X), indicating the set of all successes of X, reinterpreted as 'established
laws' or, more neutrally, 'established true consequences'. In sum, R(t) − X
and Q(S(t)) ∩ Q(X) summarize the results of HD-evaluation of theory X and
reflect the intuitively appealing dual foundation of HD-results. These
HD-results can be compared with the results of another theory, which may
lead to theory choice on the basis of the appropriate rule of success.
Before we go to the actual comparison, it is interesting to investigate whether
there are, besides (individual) problems and (general) successes, other plausible
'units of comparison', enabling a uniform model or consequence foundation of
the HD-results.
The model foundation of HD-results requires, besides the idea of an individual
problem, a model substitute of a general success of a theory. It is easy to
(Bi)R  Y − S(t) ⊆ X
(Bii)R X ∩ R(t) ⊆ Y
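In set terms, the two comparative clauses (Y − S(t) ⊆ X, so that every general success of X is a success of Y, and X ∩ R(t) ⊆ Y, so that every individual problem of Y is a problem of X) can be sketched as follows; the sets and names are illustrative:

```python
def at_least_as_successful(Y, X, R_t, S_t):
    # (Bi)R:  Y - S_t <= X -- every general success of X is a success of Y
    # (Bii)R: X & R_t <= Y -- every individual problem of Y is a problem of X
    return (Y - S_t) <= X and (X & R_t) <= Y

# Toy data: R_t = established nomic possibilities, S_t = strongest established law.
R_t = {1}
S_t = {1, 2, 3}
X = {3, 4}       # old theory: problem R_t - X = {1}, and model 4 outside S_t
Y = {1, 3}       # new theory: no problems, and contained in S_t

print(at_least_as_successful(Y, X, R_t, S_t))  # True
print(at_least_as_successful(X, Y, R_t, S_t))  # False: Y is more successful
```

Since the reverse comparison fails, Y counts as more successful in the strict sense, and RS would prescribe choosing Y, pending further evaluation.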
RS does not require that Y '(always) remains more successful', for that would
rest on the presupposition that CSH could be completely verified (when true).
Hence we only demand that CSH has been sufficiently confirmed, which is, of
course, a matter of debate, for which reason the choice is always temporary.
In Chapter 7 we have argued in great detail that the basic evaluation
methodology is functional for basic truth approximation.
Table 8.2. Foundational conclusions about the basic notions

                            Model           Dual        Consequence
'More truthlike' (basic)    intuitive       intuitive   plausible
'More successful' (basic)   not available   intuitive   plausible
HD-results                  not available   intuitive   plausible
Table 8.2. summarizes the foundational conclusions about the basic notions.
Recall that if conceptual foundations of a certain type and purpose are available
at all, they are called plausible when they can be shown to have a conceptually
sound basis. They are called intuitive when they are moreover intuitively
appealing, i.e., seem to express 'scientific common sense'.
Introduction
In this section it will be argued that it is possible to conceive the definitions of
actual and nomic truthlikeness as intralevel explications of the correspondence
theory of truth. In contrast to interlevel attempts, this intralevel explication
not only makes transparent that 'true' and 'false' amount to corresponding
and not corresponding to the facts, respectively, but also
that 'more truthlike' amounts to better corresponding to the facts.
For the empirical sciences it is plausible to distinguish one concrete and two
abstract levels. Level C, the object level, is the level of the nomic world,
whose basic units are (almost) unconceptualized systems, states of
affairs and events. On level I, the first level of representation, the basic units
are set-theoretic structures; they are used to represent members, parts or aspects
of level C. Finally, on level II, the second level of representation, the basic
units are the sentences of formal languages; they can be used to formalize level
I, and they can be interpreted on level I. Scheme 8.1. depicts the three levels.
II  sentences
      | formalization / interpretation
I   structures (models)
      | representation
C   systems

Scheme 8.1. Levels of representation
that Tarski deals in his 1935 paper only with 'Frege-Peano languages', which are formalized languages with a fixed interpretation for all nonlogical constants, and not with modern (first order) formal languages, of which it is characteristic that they have nonlogical uninterpreted constants, for which a model provides an interpretation.
We would like to add two remarks to Hodges's clarifying differential diagnosis. In line with the type of language, Tarski assumes in his original papers a
context where there is only one world or, more carefully, one model or structure
representing that world, described by the formalized language, of which every
sentence gets a definite truthvalue. This is indeed quite different from the
presentday flexible, and very fruitful, use of SOT for formal languages or, to
use Hodges' way of phrasing, 'truth in a structure' is one of the few important
scientific inventions of an indexical kind.
Surprisingly enough, Hodges does not pay attention to the fact that Tarski
wrongly suggests that his technical definition can be directly generalized to the
empirical sciences. To be precise, Tarski suggests in the informal parts of both
mentioned technical papers, but mostly in the informal version of 1944, that
he is dealing not only with (pure) mathematics, but also with (formalizable)
empirical sciences. However, he only gives the definition in full detail for what
he calls 'the language of the calculus of classes' (with a fixed interpretation for
'inclusion'!). In this context the target is not only one world, but an abstract
one, or at least a nonmaterial one, viz., the universe of classes. Having this
mathematical paradigm in mind might well explain why Tarski considers only
one target world, as well as why he does not make the distinction corresponding
to our distinction between concrete level C and abstract level I. This may be
defensible, even unavoidable, for pure mathematics; it is certainly an avoidable
confusion for the empirical sciences.
Tarski claimed first of all that SOT explicates Aristotle's dictum:
"To say of what is that it is not, or of what is not that it is, is false, while to
say of what is that it is, or of what is not that it is not, is true", or in terms of
a famous instance of Tarski's material condition of adequacy: "The sentence
'snow is white' is true if, and only if, snow is white". This condition will here
be called the condition of coordination. The claim that SOT realizes this
condition has never been seriously disputed, and it will be clear from our
description of SOT above that we can easily concede this point, although only
as far as coordination between levels I and II is concerned.
For us it is more important that Tarski, though hesitatingly, and later
Carnap, Stegmüller and Popper, without hesitation, claimed that SOT provides
in addition an (interlevel) explication of the idea of a correspondence theory
of truth. This claim, however, has been seriously disputed. For surveys of
criticism one may consult e.g., O'Connor (1975), Puntel (1978), and Kirkham
(1992). Here we will evaluate the claim in terms of what we consider to be the
two basic correspondence intuitions:
(C1) "A is true (false)" if and only if "A corresponds (does not correspond) to 'the facts'"
(C2) "B is more similar (closer) to the truth than A" if and only if
"B corresponds better to 'the facts' than A"
Assuming that A and B are sentences on level II and that 'the facts' are
localized on level I or level C, it is clear that SOT does not explicate intuition
(C2), at least we do not see any point of contact, let alone an indication, in
the direction of (C2). If we accept this, the fact that SOT may seem to explicate
intuition (C1) may well be due to the fact that it explicates and satisfies the
coordination condition. These considerations equally apply to the original
version of SOT.
One might object to this evaluation that the interlevel correspondence claim
associated with SOT has to be restricted to atomic sentences, as the 'snow is
white'example, used as an instance of correspondence, may suggest. However,
in this way the correspondence claim is not only unusually reduced, but the
nature of the correspondence is left even more in the dark.
The natural question now is of course whether there are other candidates
for an interlevel correspondence theory of truth, explicating at least intuitions
(C1) and (C2). If there were a fully elaborated picture theory of truth (between
level II and I or C, or even between I and C), this would be a plausible
candidate. However, neither Wittgenstein's formulation of it nor a modern version (Oddie 1987a) is easy to understand, let alone to evaluate with
respect to our question. Hence, let us turn to possible intralevel explications
of (C1) and (C2).
Truthlikeness theories (TLtheories) are primarily designed to explicate the expression "A is closer to the truth T than B" in an intralevel way, i.e., A, B and T are assumed to be of the same level, I or II. TLtheories usually also have a plausible definition of "A is true (false)" in terms of T. Hence, it is clear that TLtheories can, at the same time, be conceived as explications of the correspondence intuitions (C1) and (C2) as soon as one is prepared to localize 'the facts' at the same level as A and B, of course by identifying them with 'the truth'. In this way 'the facts' do not refer to unconceptualized, but to conceptualized facts. This brings us to the main claims of this section.
Claim 1:
The specific part of this claim will be illustrated in the next subsection.
8.2.2. Actual and nomic truthlikeness as correspondence theory of truth
Structures: x = <S, B, C, L> (x ∈ Mp)
switches: S = {s1, s2, s3}
bulbs: B = {b1, b2}
onswitches: C ⊆ S
lighting bulbs: L ⊆ B

Actual truthlikeness: let t (t ∈ Mp) indicate the structure representing the actual state of the circuit (hence, according to Figure 8.3., the structure with C = {s2, s3} and L = {b2}): the true description (representation) or the actual truth. The main TLquestion is: how to define "description y is closer to t than description x"?
Nomic truthlikeness: let T (T ⊆ Mp) indicate the set of structures representing the nomically possible states of the circuit (hence, roughly: a bulb lights if and only if it is in a connected path): the true theory or the nomic truth. That there exists such a set is guaranteed by the Nomic Postulate. The main TLquestion is now: how to define "theory Y, claiming that the set of structures Y is T, is closer to T than theory X"?
In order to support Claim 3 we can confine ourselves to the comparative
notions of nontrivial actual truthlikeness and basic nomic truthlikeness.
Figure 8.4. and Figure 8.5. illustrate the actual as well as the nomic story. We
start with actual truthlikeness.
To begin with, the actual truth t is identified with 'the (actual) facts'. Let us
further assume that all conceptual possibilities have the same domains Dj, but that the relations Rj defined on them may vary. (In Figures 8.4 and 8.5, F(Rj) indicates the field of Rj, i.e., the Cartesian product of the relevant domains of Rj.) Recall that the claim "x = t" is associated with 'description x'. The following
actualist explications of the correspondence intuitions are now plausible:
(C1D)
(C2D)
Figure 8.4. illustrates the false version of (C1D), and Figure 8.5. illustrates (C2D), assuming in both cases that Rj is one of the crucial relations. The figures show clearly that and how '(better) correspondence' is explicated as a
Figure 8.4. One or both *-areas are nonempty, representing that x does not correspond to t (assuming that Rj is crucial), respectively, that X does not correspond to T

Figure 8.5. One or both *-areas are nonempty, representing that y corresponds better to t than x (assuming that Rj is crucial), respectively, that Y corresponds better to T than X
matter of (more) overlap between sets (representing relations). (That the areas
have equal shape and size has no particular relevance.)
The above explication of 'closer to the actual truth' is said to be nontrivial,
as opposed to the trivial explication, according to which y is only closer to t
than x, if y = t and x ≠ y.
Let us now turn to the nomic case. Now we identify the nomic truth T with
'the (nomic) facts'. It is important to notice once more the crucial difference
between the differentiated theory and all other TLtheories: we do not assume,
in a theorydirected context, that 'the truth' concerns one single structure
(representing the actual possibility/world) or, on the sentential level II, one
complete theory in the standard sense. Recall that the claim "X = T" is associated with 'theory X'. A weak notion of 'true/false' arises when a theory is called
'true/false' when the 'hypothesisclaim' that T is a subset of X is true/false. A
strong notion arises when we take as criterion that the full claim of the theory,
"X = T", is true/false. In this strong interpretation, the explications become
similar to the actual ones:
(C1T)
(C2T)
Note that Figure 8.4. also illustrates the false version of (C1T), and that Figure 8.5. also illustrates (C2T), again showing that '(better) correspondence' is a matter of (more) overlap of sets, now sets of structures.
Recall that the above explication of 'closer to the nomic truth' is said to be
basic, and is implicitly based on trivial actual truthlikeness, as opposed to
refined explications, which are based on nontrivial forms of actual truthlikeness. One gets Miller's TLtheory from basic nomic truthlikeness if one assumes,
in addition, that T only contains one structure, representing the actual world.
Coming back to the main topic of this section, the question is: what are we
to think of the presented intralevel explication of the correspondence intuitions?
To start with, we would like to conclude that, if it is a satisfactory explication,
then any notion of which the basic explication can be given in terms of the
settheoretic difference is a kind of intralevel correspondence notion. But should
the general nature of an explication be an objection?
The main question seems to be this. The explication is question-begging, for
it presupposes the statements "t is the true description" and "T is the true
theory", and the interlevel correspondence (possibly pictorial) is hidden in
them. However, our tentative answer is that the meaning of these statements
is only this: t and T are the (hypothetical) results of a faultless application of
the relevant representation conventions, given by (variants of) Tarski's truth
definition. Of course, if one wishes to formalize, and hence to work with two
abstract levels, it is desirable to have the representations on both levels such
that they mutually satisfy the coordination condition, i.e., Tarski's material
condition of adequacy or Aristotle's dictum. But this coordination is only a
matter of easy convention, where theoretical terms do not seem to form an
additional problem, as opposed to claims of (lack of) intralevel correspondence
and better correspondence. The latter claims form the substantial part of the
empirical sciences, where 'the facts' or 'the truth' form the unknown. Of course,
we may not expect to be able to establish such objective truths beyond any
reasonable doubt. We will, at most, be able to find intersubjectively applicable
truth criteria that lead to intersubjective truths, our best candidates for objective truths.
For the natural sciences at least, we would like to defend this. In the social
sciences it is doubtful whether it makes sense to talk about 'the facts/the truth',
in particular the nomic ones, even if we fix the conceptual means beforehand,
as has been presupposed throughout this section.
Referring to the survey of bifurcations of actual and nomic truthlikeness
given in the previous chapter, at the end of the next chapter we will briefly
come back, as suggested earlier, to the correspondence theory of truth in the
context of epistemologically stratified theories (Chapter 9). Moreover, it will
Introduction
Recall the problem of actual truthlikeness: what does (can) it mean that one
description is closer, or more similar, to the true description of the actual world
than another description. Here we will restrict our attention to the solution of
that problem, dealt with in Section 7.1., for propositional structures, or propositional constituents. These constituents are conjunctions of negated and
unnegated elementary propositions belonging to a fixed finite set: EP = {p1, p2, ..., pn}. A constituent can be represented by the set of its unnegated
elementary propositions. Let x, y, etc. indicate subsets of EP representing
constituents in this way.
When the claim of a description x, viz., "x = t", where t represents the true
description, is true or false with respect to a particular elementary proposition,
that proposition is called a 'match' or 'mistake' of the description, respectively.
Hence, the set of matches of x is (x ∩ t) ∪ ((EP − x) ∩ (EP − t)), and the set of mistakes is (x − t) ∪ (t − x). The definition of "y is closer to the truth t than x", (Di) and (Dii) of Subsection 7.1.2., amounted to the claim that the set of matches of y properly includes that of x or, equivalently, that the set of mistakes of y is a proper subset of that of x.
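These definitions can be transcribed directly into set operations. The following Python sketch is our illustration only (EP and the constituents t, x, y are made-up examples, not the text's); as above, a constituent is represented by the set of its unnegated elementary propositions.

```python
# Our illustration (EP and the constituents t, x, y are made up):
# a constituent is represented by the set of its unnegated elementary
# propositions, taken from the fixed finite set EP.
EP = frozenset({"p1", "p2", "p3", "p4"})

def matches(x, t):
    # propositions on which the claim "x = t" is right: those in both
    # x and t, plus those in neither (complements taken within EP)
    return (x & t) | ((EP - x) & (EP - t))

def mistakes(x, t):
    # propositions on which "x = t" is wrong: the symmetric difference
    return (x - t) | (t - x)

def closer_to_truth(y, x, t):
    # y is closer to the truth t than x iff the mistakes of y form a
    # proper subset of those of x (equivalently, the matches of y
    # properly include those of x)
    return mistakes(y, t) < mistakes(x, t)

t = frozenset({"p1", "p2"})   # the true description
x = frozenset({"p3", "p4"})   # mistaken on all four propositions
y = frozenset({"p1", "p4"})   # mistaken on p2 and p4 only

assert mistakes(x, t) == EP
assert mistakes(y, t) == {"p2", "p4"}
assert closer_to_truth(y, x, t) and not closer_to_truth(x, y, t)
```

Note that Python's `<` on sets is exactly proper inclusion, so the comparative notion remains a partial order: two descriptions with incomparable mistake sets are incomparable.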
8.3.1.1. Dialectical approach of the actual truth: logical version
Let us start with one of the main dialectical concepts. The idea of dialectical
negation seems to be composed of at least three partial intuitions: contradicting,
incorporating and superseding what is negated. Note that 'superseding what is
negated' implies that what is negated has to be false or at least cannot be the
whole truth. This implication might be considered as a separate intuition, but
we will neglect it further because it is implied by the three basic intuitions. We
will now specify the basic intuitions in terms of statements. When statement
S* is a dialectical negation of S, S* contradicts S, it incorporates what was
good of S and supersedes S in this respect.
Descriptions, as interpreted above, are statements, due to their claims. Now
it is easy to check that when y is closer to the truth t than x, they (i.e., their
Dialectics is said to apply not only to knowledge claims (above and below)
and concepts (below), but also to reality, in particular to the development of
the actual world, which might be called ontological dialectics of the actual
world. Of course, if the actual world changes, the actual truth changes as well.
It is now plausible to focus on successive actual states of the world and to
attach indices, indicating successive moments of time, to the momentary true
representation t, leading to: t1, t2, ....
We begin by relativizing all judgments to an arbitrary later state. In the light of t3, t2 is a dialectical negation of t1 iff t2 is more similar to t3 than t1. In the light of t4, t3 is a double negation of t1 with t2 as intermediate iff t2 is more similar to t4 than t1 and t3 is more similar to t4 than t2. In the light of t4, <t1, t2, t3> is a (symmetric) thesis-antithesis-synthesis triad iff t3 is more similar to t4 than t1 as well as t2, whereas t2 is not more similar to t4 than t1, nor the converse.
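As an illustration only (the propositional representation and the particular states are our own made-up example, not the text's), these relativized qualifications can be expressed with the mistakes-based comparison of Section 8.3.1:

```python
# Our illustration (propositional representation and states made up):
# successive actual states t1, t2, ... as sets of unnegated elementary
# propositions; "more similar to" via proper inclusion of mistakes.
def closer(y, x, t):
    # y is more similar to t than x
    return ((y - t) | (t - y)) < ((x - t) | (t - x))

def dialectical_negation(t2, t1, t3):
    # in the light of t3, t2 is a dialectical negation of t1
    return closer(t2, t1, t3)

def triad(t1, t2, t3, t4):
    # in the light of t4, <t1, t2, t3> is a (symmetric)
    # thesis-antithesis-synthesis triad
    return (closer(t3, t1, t4) and closer(t3, t2, t4)
            and not closer(t2, t1, t4) and not closer(t1, t2, t4))

t1 = frozenset({"p1"})
t2 = frozenset({"p2"})
t3 = frozenset({"p1", "p2"})
t4 = frozenset({"p1", "p2", "p3"})

assert dialectical_negation(t3, t1, t4)
assert triad(t1, t2, t3, t4)
```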
These qualifications of developments acquire an absolute character of a
strongly metaphysical nature when there is supposed to be something like a
final state, the absolute, in the light of which all judgements are made, even if
one does not believe in an ideal conceptual frame. It gets a still stronger
absolute character when it is supposed, in addition, that the conceptual frame
is ideal.
8.3.2. Nomic dialectics
Recall that the second problem of truthlikeness concerned the idea that one
theory is closer, or more similar, to the true theory about what is nomically
possible than another theory. Unlike the situation in the actual perspective,
there was in the nomic perspective no reason to restrict the exposition to a
particular kind of structure. We only have to assume the Nomic Postulate,
that is, for a given domain and Mp, there is a unique subset T of nomic
possibilities. X, Y, etc. indicate again subsets of Mp.
When the claim of a theory X, viz., "X = T" is true or false with respect to
For the actual perspective, we will again restrict our attention to propositional
descriptions. Recall that explicit descriptions were supposed to be given in total
Far from being known, the nomic truth T is, in theory directed empirical
sciences, the 'great unknown'. We have seen in the previous chapter that
negation in the logical sense, then it has always to be at least as successful and
it will remain more successful and hence retain its status as datarelative
dialectical negation in the light of all possible new evidence.
Again, when the conceptual frame is supposed to be ideal, the implicit goal
T is the absolute in a conceptually nonrelative sense.
Concluding remarks
We start with some remarks about our explication of dialectics, leading to
some general points. Since the structuralist theory of truthlikeness can be made
more sophisticated in several ways, in particular epistemological stratification
and refinement (Chapter 9 and 10), such sophistication may well lead to
refinements of the proposed explications of dialectical concepts. In Chapter 10
we will argue that (structuralistically represented) 'concretization' can be seen
as a special, refined kind of dialectical negation or correspondence, in harmony
with the corresponding claims of Izabella Nowakowa (1974) and Leszek Nowak
(1977). However, what has been presented will be enough for formally interested
dialecticians to decide whether the structuralist theory of truthlikeness can help
them to clarify their concepts.
One objection we can predict already. Although, as mentioned, the dialectics
of theories can be reinterpreted as the dialectics of concepts, dialecticians will
argue that the dialectics of concepts must involve more, in particular the
development of what we called the conceptual possibilities constituting the
conceptual space, which we kept constant. As a matter of fact Thagard (1982)
deals with this problem from the general structuralist point of view (not
specifically related to truthlikeness). In fact, our technical hesitations about Thagard's paper motivated us to design explications within a fixed conceptual space.
Of course, in general, assuming a fixed conceptual frame is a strong idealization, about which we would like to make some closing remarks. T was introduced as the set of nomic possibilities. In standard structuralist terms, this set
is one possible specification of the (unstratified, basic version of the) so-called
set of intended applications or intended domain. T is a subset of Mp and hence
represents the intended domain within the conceptual means of Mp, i.e., it is
the intended domain 'as seen through Mp'. It is evident that Mp is man-made; hence T, being Mp-dependent, is man-made. Hence we subscribe to a fundamental form of conceptual relativity and constructivism. But this need not imply
an extreme form of relativism: claims of theories and hypotheses are objectively
true or false, for their truth or falsehood depends on the natural world.
On the other hand, we see that the objective character of claims does not
imply that the intended domain and the conceptual frame (Mp), and hence the
nomic truth T, are fixed beforehand, and that the only task that remains is to
formulate a subset X of Mp leading to a true theory claim. As a matter of fact,
Introduction
Thus far it might seem that our conceptually relative point of departure leads
to an extreme form of relativistic (nomic) realism. However, this would only
be the case if we excluded constraints between the truths generated by different
vocabularies for the same domain. In this chapter we will deal with the relation
between an observational and a theoretical (cum observational) level, generated
by an observational and an encompassing theoretical vocabulary for the same
domain. Here the distinction between observational and theoretical components is, of course, assumed to be not of the classical, absolute form but of a
sophisticated, relative kind. In particular, the distinction is supposed to reflect
as much as possible what scientists consider as observable versus nonobservable/theoretical, including the fact that the distinction changes when
theories are accepted and become observation theories. However, here we
assume a temporarily fixed distinction. It will generate not only different kinds
of (basic) truthlikeness, but also several relations between them.
Besides the intralevel Nomic Postulate, we will introduce two interlevel
postulates, the Truth Projection Postulate and the Fiction Postulate.¹ On their
basis it will be possible to prove the Projection Theorem stating that, under
certain conditions, more truthlikeness on the theoretical(cumobservational)
level is projected on the observational level. In other words, more theoretical
truthlikeness implies more observational truthlikeness. Together with the old
Success Theorem, according to which more observational truthlikeness implies
more successfulness, we get the corollary that, under the same extra conditions,
more theoretical truthlikeness implies more successfulness. In the reverse direction we get, by a combination of the old Forward Theorem and the new
Upward Theorem, suggested by the Projection Theorem, that RS and hence
HDevaluation is functional for theoretical truth approximation.
We will first present the Projection and the Upward Theorem in the suggested
straightforward sense. It will turn out to be possible to define, on the basis of
the Nomic Postulate applied to a given vocabulary and domain, which theoretical terms really refer and which do not. The two new theorems can then also
be derived in two steps. One between observational truthlikeness and truthlikeness on the level of referring terms, called 'substantial truthlikeness', and the
other between substantial truthlikeness and theoretical truthlikeness.
Introduction
a subvocabulary V' in the sense that, for all x in Mp(V), n(x) is the result of
dropping the extra components in x and the structure clauses using them.
Technically speaking, this means that n(x) is a substructure of x belonging to
Mp(V').2
In the context of n, it will always be clear what V and V' are. <V, Mp(V)> is also called a (conceptual) level, and a superlevel of <V', Mp(V')>, whereas the latter is called a sublevel of the former. For a subset X of Mp(V), nX indicates the set of projections of all members of X, being a subset of Mp(V'), and will be called the V'-projection of X. Conversely, for a subset X of Mp(V'), n⁻¹X is the subset of members of Mp(V) with projections in X and is called the V-reproduction of X. Note that n⁻¹nX is always a superset of X, and may or may not coincide with X. Both n{x} = {n(x)} and n(x) itself are called the V'-projection of x, and n⁻¹{x} the V-reproduction of x (note that n⁻¹(x) has not yet been defined, hence it might be equated with n⁻¹{x}).
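A minimal computational sketch of these notions (our illustration; the two-component vocabulary and the names proj and repro are hypothetical):

```python
from itertools import product

# Our illustration (vocabularies hypothetical): V has two components,
# V' one; n drops the second component.
MpV = set(product({0, 1}, {0, 1}))   # Mp(V): all pairs
MpV_prime = {(0,), (1,)}             # Mp(V')

def proj(x):
    # n(x): the substructure of x belonging to Mp(V')
    return (x[0],)

def proj_set(X):
    # nX: the V'-projection of a subset X of Mp(V)
    return {proj(x) for x in X}

def repro(Xp):
    # n^-1 X: the V-reproduction of a subset X of Mp(V')
    return {x for x in MpV if proj(x) in Xp}

X = {(0, 0), (1, 1)}
assert proj_set(X) == MpV_prime
# n^-1 nX is always a superset of X ...
assert repro(proj_set(X)) >= X
# ... and may fail to coincide with it:
assert repro(proj_set(X)) == MpV != X
```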
9.1.1. Theoretical truth approximation
Let there be given a domain of interest D. We will first study the relation between an observational (o-)level and a theoretical (t-)level for D. In the next subsection we will take the intermediate, referential (r-)level of really referring terms into account. Let Vt indicate the richest vocabulary of non-logico-mathematical terms that play a role in one of the theories involved, that is, all relevant observational and theoretical terms which are in consideration. Let Vo indicate the subset of Vt of observation terms.
Let Mp(Vo) and Mp(Vt) indicate the set of conceptual possibilities on the o- and t-level, respectively. Applying the Nomic Postulate on both levels, these sets generate, in combination with D, unique, time-independent subsets of nomic possibilities on each level, called the observational (nomic) truth To and the theoretical (nomic) truth Tt, respectively. The term 'nomic' will from now on be omitted. Moreover, in this chapter truthlikeness of theories on any level will always refer to the basic notion defined in Chapter 7.
We will assume that nTt = To, the Truth Projection (TP) Postulate (from the t-level to the o-level), indicated by TPP(t,o). That nTt is a subset of To is a semantic fact, for if a conceptual possibility on the t-level is nomically possible, it will remain nomically possible when one skips some components. The motivation for the substantial side of the TP-Postulate, viz., that To is a subset of nTt, will be postponed till the introduction of the intermediate referential level. Although we do not see how TPP(t,o) could be violated, we will nevertheless mention TPP(t,o) in proof sketches when the substantial side is presupposed.
The important question is, of course, whether not only the truth but also truthlikeness on the t-level is projected on the o-level, that is, assuming that X and Y are subsets of Mp(Vt), does MTL(X, Y, Tt), i.e., Y is more truthlike than X or, more precisely, Y is at least as close to Tt as X, imply MTL(nX, nY, nTt),
and hence, using the TP-Postulate, MTL(nX, nY, To)? Figure 9.1. depicts the formal situation.

Figure 9.1. Theoretical truthlikeness (shaded areas in Mp(Vt) empty) implies external observational truthlikeness (shaded area in Mp(Vo) empty), but not internal (the ?-area may be nonempty)
On the 'external side' we have to prove that the external (law) clause of truthlikeness on the t-level implies that on the o-level. Hence, using the (Bi)'-version of the external clause, to be proved is that the emptiness of Y − (X ∪ Tt) implies that of nY − (nX ∪ nTt), which immediately follows from the fact that n is a function. Moreover, the semantic side of the TP-Postulate guarantees that nY − (nX ∪ To) is empty when nY − (nX ∪ nTt) is.
A similar unconditional proof is impossible for the 'internal side', essentially due to the many-one character of the projection function. We will first characterize the gap and then specify a sufficient condition to fill that gap. Suppose that the internal clause of "Y is at least as close to Tt as X" holds, that is, according to (Bii)', suppose that X ∩ Tt − Y is empty. What we would like to prove is that nX ∩ To − nY is empty. This would immediately follow from the emptiness of nX ∩ nTt − nY, by the substantial side of the TP-Postulate, i.e., To is a subset of nTt. Hence, from this perspective, it remains to be proved that nX ∩ nTt − nY is empty.
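The gap just characterized can be exhibited concretely. In the following made-up Python example (structures and truths are ours, not the text's), the external clause projects but the internal clause fails:

```python
# Our toy illustration (structures and truths made up): the basic
# comparative notion MTL and a projection n dropping the second component.
def MTL(X, Y, T):
    # external clause (Bi)': Y - (X ∪ T) empty;
    # internal clause (Bii)': X ∩ T - Y empty
    return not (Y - (X | T)) and not ((X & T) - Y)

def proj_set(X):
    # n: project pairs onto their first component
    return {(x[0],) for x in X}

Tt = {(0, 0)}          # hypothetical theoretical truth
X = {(0, 1)}           # a 'wrong' model that projects correctly
Y = set()              # the empty theory

# On the t-level, Y is at least as close to Tt as X ...
assert MTL(X, Y, Tt)

nX, nY, nTt = proj_set(X), proj_set(Y), proj_set(Tt)
# ... and the external clause projects (n is a function) ...
assert not (nY - (nX | nTt))
# ... but the internal clause does not: nX ∩ nTt - nY is nonempty,
# so MTL fails after projection.
assert (nX & nTt) - nY == {(0,)}
assert not MTL(nX, nY, nTt)
```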
What is the nature of possible elements of nX ∩ nTt − nY? (Bii)' excludes that they are projections deriving from X ∩ Tt − Y. Since members of X ∩ Tt − Y would be extra 'theoretically correct' models of X, we might say that X has no extra observationally correct models deriving from theoretically correct models. If nX ∩ nTt − nY is nevertheless nonempty, its members must
n(X ∩ Tt) = nX ∩ nTt
which will sometimes be indicated by RC(X, t, o), to indicate the relevant levels. Note that RC(X) has been defined such that it is independent of the TP-Postulate. The name of the assumption is based on the fact that it precisely guarantees that (relative to nTt) correct models of nX can be extended to (relative to Tt) correct models of X. In other words, it guarantees that X has theoretically correct reproductions of all its o-correct models. How strong is RC(X)? To be sure, it means something like: X is on the right track, with the
9.1.2. Substantial truth approximation and the innocent role of fictions
Now we will introduce the intermediate level of terms that really refer to something real concerning the domain, called the substantial or referential (r-)level. For a term τ in Vt, let Vt' indicate Vt − {τ} and let n project Mp(Vt) onto Mp(Vt'). Then τ is said to refer according to Tt, or simply really refers or Ttrefers, if and only if Tt, being on formal grounds a subset of n⁻¹nTt, is a proper subset of n⁻¹nTt. That is, Tt is a proper subset of the reproduction of the projection of Tt. This defining clause, the Ttreference clause, is not yet to be interpreted as a reference criterion. It amounts to the claim that τ makes a difference or plays a substantial role in the theoretical truth Tt in the sense that Tt is constrained by τ in the sense defined. With this definition, we formally operationalize the idea that there is in reality some (type of) item which plays the role τ is ascribed to by Tt.⁴
The definition still needs some qualification. It seems adequate primarily for
theoretical terms purporting to refer to attributes (properties, relations, functions); attribute terms for short. However, for theoretical terms purporting to
refer to classes of entities, used as domainsets for attribute terms  entity terms
for short  this definition does not seem to work. However, it is plausible to
say that a (theoretical) entity term in Vt Ttrefers if and only if there is at least
one (theoretical) attribute term that Ttrefers and that uses the entity term as
(one of) its domainset(s). The consequence is that an entity term does not Ttrefer if there are no attribute terms using it as domainset. However, in this
case it is difficult to see how that theoretical domain could play a substantial
role in Tt, which is precisely the reason for the detour via attribute terms. If
this detour is not possible, the theoretical entities hang in the air as unconstrained entities, not distinguishable from genuine fictitious entities, at least
not with the means provided by D and Vt.
Note that the very possibility to define reference of entity terms on the basis
of the definition of reference of attribute terms is a good reason to extend the
idea of 'entity realism' to 'referential realism', as has already been suggested in
Chapter 1.
The given definition of reference is explicitly Ttrelative. We might abstract from this by the definition that a term τ refers in an absolute sense if there is at least one domain and if τ belongs to at least one vocabulary such that τ refers relative to the resulting theoretical truth.
For the moment we define the referential vocabulary Vr(Tt), or simply Vr, giving rise to the referential (r-)level, as the union of Vo and those members of Vt that Ttrefer; its associated nomic truth Tr will be called the referential truth. Hence, by definition, Vo is a subset of Vr and Vr is a subset of Vt. So we assume that it has already been
subset of Vr and Vr is a subset of Vt. So we assume that it has already been
established that the observation terms refer in some other context. They may
or may not refer in the present context, i.e., they mayor may not Ttrefer.
Let Mp(Vr) indicate the set of conceptual possibilities on the rlevel. By
applying the Nomic Postulate on this level, Mp(Vr) generates, in combination
and, finally,

o/r Upward Theorem:
if nY is closer to To than nX, then Y* is closer to Tr than X*, where
X* =def X − {x ∈ X − Y | n(x) ∈ nY ∩ To}
Y* =def X ∩ Y
On the basis of these results we can motivate the claim that RS, and hence
HDevaluation, are functional for substantial truth approximation in formally
the same way as we motivated that they are functional for theoretical truth
approximation.
Let us now turn to the relation between the r-level and the t-level, and hence
the relation between substantial and theoretical truth approximation. Till further notice, π now refers to the projection of Mp(Vt) onto Mp(Vr). Recall that
the terms in Vt − Vr are supposed not to Tt-refer in the sense that they do not
curtail the nomic possibilities on the t-level. Formally, it is assumed that none
of these terms satisfies the Tt-reference clause. In other words, as far as Tt is
concerned, they are fictions. Hence, we may assume that Tt is the Vt-reproduction of Tr, i.e., Tt = π⁻¹Tr, which will be called the Fiction (F-)Postulate
(FP(t, r)). This name is somewhat misleading, for the mentioned condition is
only a necessary feature of fictions. That Tt = π⁻¹Tr is not a sufficient condition
for letting Vt − Vr concern fictions is due to the fact that one or more extra
terms may not narrow down the number of nomic possibilities in co-production
with Vr, but there may be other referring terms, outside Vt, that would do.
However this may be, note that Vr could have been defined as the smallest set
V' between Vo and Vt such that its truth T' satisfies the fiction postulate
relative to Tt.
Now it is important to note and easy to check that the F-Postulate implies
the TP-Postulate between Tt and Tr, i.e., πTt = Tr, indicated by TPP(t, r).
Hence, in combination with the motivation of TPP(r, o), we may conclude that
the motivation for TPP(t, o) has been completed.
As already explained, RC(X, t, o) for a subset X of Mp(Vt) and RC(X, r, o)
for a subset of Mp(Vr) are not very strong. Since Tt is, due to the F-Postulate,
as large as possible, RC(X, t, r) for some subset X of Mp(Vt) becomes even
trivial. That is, every theory will satisfy it, as is not difficult to check. Moreover,
FP(t, o) guarantees that for a subset X of Mp(Vt), RC(X, t, o) iff RC(πX, r, o),
for its Vr-projection πX.
In sum we may now derive the unconditional

t/r Projection Theorem:
if Y is at least as close to the theoretical truth Tt as X then πY is at
least as close to the substantial truth Tr as πX,
formally: MTL(X, Y, Tt) implies MTL(πX, πY, Tr)
It is easy to check that, using RC(X, t, o) iff RC(πX, r, o), the t/r and the r/o
Projection Theorem together imply, by a kind of transitivity, the straightforward (t/o) Projection Theorem.
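For concreteness, the comparative relation MTL can be sketched on finite sets, assuming the basic symmetric-difference reading on which Y is at least as close to T as X iff Y∆T ⊆ X∆T; the sets below are invented.

```python
# Sketch of "Y is at least as close to T as X" in the basic reading:
# Y's symmetric difference with the truth T is contained in X's.
# This simplified comparison and the toy sets are assumptions here.

def mtl(X, Y, T):
    """MTL(X, Y, T): (Y ^ T) is a subset of (X ^ T)."""
    return (Y ^ T) <= (X ^ T)

T = {1, 2, 3}        # the (nomic) truth
X = {1, 5, 6}        # old theory: mistaken inclusions and omissions
Y = {1, 2, 5}        # new theory: corrects some errors, adds none

print(mtl(X, Y, T))  # True
print(mtl(Y, X, T))  # False
```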
Of course, we can also derive
From this section we may conclude that there is a strong relation between
more observational success and the three thus far considered kinds of truthlikeness: observational, substantial and theoretical. Empirical progress supports
all three kinds of corresponding truth approximation claims in a clearly defined
way. The interesting remaining question is whether empirical progress supports,
in addition, the corresponding referential truth approximation claim.
Introduction
In this section we will first give a precise definition of the referential claim of
a theory and the idea that one theory is closer to the referential truth than
another. It will turn out that theoretical truthlikeness does not imply referential
truthlikeness, with the consequence that empirical progress only weakly supports referential truth approximation. For this reason we will look for other
kinds of theoretical and experimental arguments in favor of referential truth
approximation claims.
9.2.1. Referential truthlikeness
the referential truth becomes the referential claim of Tt, hence the strongest
true referential claim about Vt.
For subsets X and Y of Mp(Vt) we define, of course, the comparative claim
"theory Y is at least as close to the referential truth Vr as X", or "Y is
referentially at least as close to the truth as X", if RCY is at least as close to
the referential truth as RCX in the sense of propositional actual truthlikeness,
defined in Chapter 7, ranging over all elementary propositions of the form: "r
refers" for all r in Vo and "r Tt-refers" for all r in Vt − Vo. In terms of sets this
amounts to:

for all r in Vr: if r belongs to Vr(X) then it belongs to Vr(Y)
(Vr ∩ Vr(X) ⊆ Vr(Y))
and for all r not in Vr: if r belongs to Vr(Y) then it belongs to Vr(X)
(Vr(Y) − Vr ⊆ Vr(X))
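The two subset clauses can be checked mechanically on toy reference sets; the referential truth and the reference claims below are invented for illustration.

```python
# Sketch of the comparative referential claim "Y is referentially at
# least as close to the truth as X", assuming the two subset clauses
# (Vr & VrX) <= VrY and (VrY - Vr) <= VrX; all sets are invented.

def ref_at_least_as_close(VrX, VrY, Vr):
    """(Vr & VrX) <= VrY  and  (VrY - Vr) <= VrX."""
    return (Vr & VrX) <= VrY and (VrY - Vr) <= VrX

Vr  = {"electron", "spin"}            # invented referential truth
VrX = {"electron", "ether"}           # X's reference claims
VrY = {"electron", "spin", "ether"}   # Y adds a true claim, keeps X's error

print(ref_at_least_as_close(VrX, VrY, Vr))  # True
print(ref_at_least_as_close(VrY, VrX, Vr))  # False: X lacks "spin"
```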
The interesting question concerns what the relation is between more referential and more theoretical truthlikeness. Whereas it is not plausible to expect
that the first implies the second, the reverse may seem plausible, in particular
because the theoretical claim of a theory "X = Tt" implies the referential claim
"Vr(X) = Vr(Tt)". However, a proof for the suggested conjecture is not possible,
for interesting reasons. Suppose that MTL(X, Y, Tt). What we would like to
prove amounts to the following: all r in Vr(Y) − Vr(X) belong to Vr and all r
in Vr(X) − Vr(Y) do not belong to Vr. Let us suppose that r in Vr(Y) − Vr(X)
does not belong to Vr. Hence, Y wrongly claims that it does, whereas X rightly
claims that it does not. However, X's claim may well be based on a wrong
aspect of its theoretical claim, whereas Y may not yet be so good that its
theoretical claim implies the reference claim for the right reasons. It is important
to note that the suggested proof already fails in the case of just one theoretical
term. Moreover, a restriction to theories which are true-as-hypothesis does not
alter the situation. Similar arguments can be given for the possibility that r in
Vr(X) − Vr(Y) may belong to Vr.
It may seem interesting to enquire whether the relation holds when we only
look at the referential level. Does substantial truthlikeness imply referential
truthlikeness as far as referring terms are concerned? Of course, for subsets X
and Y of Mp(Vt) and π indicating the projection on Mp(Vr), "πY is at least as
close to the referential truth Vr as πX" amounts to (as far as Vr-terms are
concerned):

for all r in Vr: if r belongs to Vr(πX) then it belongs to Vr(πY)
(Vr ∩ Vr(πX) ⊆ Vr(πY))
that being closer to the theoretical truth or to the substantial truth provides
good arguments for supposing that the relevant theory is closer to the referential
truth on the corresponding level. For as can be learned from both types of
proof-attempts, the suggested entailments are only violated when a theory
bases its referential claim on wrong reasons in precisely the right direction,
which would be rather exceptional because it is rather artificial.6
In the meantime, it is a plausible suggestion that referential truthlikeness on
the theoretical level does imply referential truthlikeness on the referential level.
However, this is also not the case, for similar reasons. The suggested (t/r)
projection of referential truthlikeness would be easy to prove if Vr(πX) =
Vr ∩ Vr(X) held in general, but this is not the case. It is only guaranteed that
Vr(πX) is a subset of Vr ∩ Vr(X), a subset relation which is even more or less
trivial. But there may be a term in Vr ∩ Vr(X) which does not belong to Vr(πX).
That is, there may be a Vr-term about which X, apparently rightly, claims that
it Tt-refers, whereas πX no longer claims that it Tt-refers. This occurs when
the claim of X essentially uses non-Tt-referring terms which have been dropped
in πX. In other words, a theory may have a true specific reference claim for
the wrong reasons, in the particular sense of using wrong, non-referring, other
means. Due to this possibility, referential truth approximation on the theoretical
level need not be projected on the referential level. In particular, the suggested
transition from the claim "theory Y is referentially at least as close to the truth
as X" to the claim "πY is referentially at least as close to the truth as πX" is
invalid iff Y loses a true specific reference claim based on wrong reasons, while
X does not lose that claim, apparently, since X based it on better grounds.
Since the suggested projection claim is invalid, it is plausible to focus,
henceforth, on referential truthlikeness on the theoretical level, where, so to
speak, all specific reference claims are taken equally seriously.
9.2.2. Referential truth approximation and the assessment of reference claims

What are the prospects for referential truth approximation on the basis of
more observational success (empirical progress) or even observational truthlikeness? Given that the kind of proofs required for this purpose are formally
similar to those searched for in the previous cases, there do not seem to be
interesting theorems in this respect. At least, we did not find any.
In this subsection we will discuss in general what arguments can be given
for specific and general, separate and comparative, reference claims. First we
will deal with theoretical arguments, followed by experimental and then combined ones. Then we will deal with the consequences of the acceptance of
specific reference claims together with experimental and/or theoretical criteria
for applying them, that is, when a shift in the Observable/Unobservable
Distinction has taken place. This is called an OUD-shift by Douven (1996).
Before we start with theoretical arguments that can be used to accept or
reject such reference claims, it is important to make clear that we presuppose
a sharp distinction between the question of whether a term refers and whether
a term, as used by different theories and theorists, refers to the same item.
There is the famous debate (Laudan 1981, Hacking 1983, Radder 1988) on
whether the notion 'electron', as used by Lorentz, Bohr, Schrödinger, Dirac
and in modern quantum electrodynamics, refers to the same item. Such 'co-reference claims' are certainly interesting, but they essentially presuppose that
all these terms refer anyhow, and suggest that straightforward reference claims
can be evaluated.
As we have shown, starting from the Nomic Postulate, it is possible to define
reference and reference claims in such a way that we do not have to specify in
detail what role the hypothesized item is supposed to play. To be precise, what
we need to be able to specify is the formal character of a term, that is, whether
it is an entity term, supposed to refer to a (kind of) entity, or an attribute
term, supposed to refer to a property, relation or function. For an attribute
term, it will be possible to specify what the relevant (sub)domains are. To
define Tt-reference, in the relevant abstract sense, this is essentially enough.
To define that a certain theory implies a certain Tt-reference claim, we need
to specify the theory itself, which implies in principle a specification of the role
the item is supposed to play. Hence, the reference claim of a theory is usually
conceived as the claim that the term refers and that the relevant item plays
such and such a role. As we have seen, our definition leaves room for the
possibility that the primary claim is true and that the additional 'role-claim' is
false, and hence that the reference claim can be taken in its pure form.
As far as co-reference claims are about theories using (part of) the same
vocabulary, it is clear that it is now possible to say that these theories refer to
the same item, without attributing to it the same role. They refer to the same
item if the term Tt-refers and each of them attributes to it a somewhat different,
and probably mistaken, role. For example, let us suppose, for a while, that
Sommerfeld's version of the 'old quantum theory' of the one-electron atom
corresponds to the theoretical truth and hence implies the referential truth. In
that case, Rutherford's and Bohr's versions were referring to the same electrons,
but ascribing different roles to them. To be sure, to extend such claims in a
responsible way to theories using fundamentally different vocabularies is not
an easy task, and we will not go into that issue further.
Let us now start with the separate evaluation of the (general) referential
claim of a theory X, "Vr(X) = Vr", RCX. For accepting it, it is necessary to
argue for all component claims. For rejecting it, it is sufficient to argue against
one. Of course, it may well be possible to argue in favor of the total claim by
an argument in favor of all claims at the same time. Similarly, it may be
possible to argue against the total claim in one stroke. Such arguments will be
called holistic.
Let us first suppose that the evidence is such that X has not yet been falsified
and that it explains all established laws, i.e., R ⊆ πX ⊆ S. This observational
success of X, indicated by OSX, deductively confirms the observational claim
"πX = To" of X (OCX), for that entails OSX. Since the theoretical claim
Table 9.1.

p(TCX)    <   p(TCX/OSX)    <   p(TCX/OCX)
   <              <                 <
p(RCX)    <?  p(RCX/OSX)    <?  p(RCX/OCX)
   <              <                 <
p(RCrX)   <?  p(RCrX/OSX)   <?  p(RCrX/OCX)
Now it may well be that p(RCX), and hence p(TCX), is (much) smaller
than 1/2, whereas p(TCX/OSX), and hence p(RCX/OSX), becomes larger
than 1/2. The latter may, in particular, happen when OSX has become such
that p(OCX/OSX) approaches 1, for p(TCX/OSX), which is equal to

p(OCX/OSX) · p(TCX/OCX) + p(non-OCX/OSX) · p(TCX/non-OCX)

then approaches p(TCX/OCX), which may indeed well be larger than 1/2.
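The decomposition can be illustrated numerically; the probabilities below are invented, chosen so that p(OCX/OSX) is close to 1, and p(TCX/non-OCX) is 0 since TCX entails OCX.

```python
# Numerical sketch of the decomposition of p(TCX/OSX); all values are
# invented for illustration.

p_OCX_given_OSX    = 0.95   # close to 1, as in the favorable case
p_TCX_given_OCX    = 0.6    # may well exceed 1/2
p_TCX_given_nonOCX = 0.0    # TCX entails OCX, so this term vanishes

p_TCX_given_OSX = (p_OCX_given_OSX * p_TCX_given_OCX
                   + (1 - p_OCX_given_OSX) * p_TCX_given_nonOCX)
print(round(p_TCX_given_OSX, 3))   # 0.57, approaching p(TCX/OCX) = 0.6
```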
In confirmation terms, note first that OCX (deductively) confirms TCX
and OSX. Note further that the first joint assumption, i.e., p(RCX)
< 1/2 < p(TCX/OSX), is also a sufficient, but by no means a necessary,
condition for p(RCX) < p(RCX/OSX), i.e., OSX non-deductively confirms
RCX (Section 3.1.). Similarly, the second joint assumption, i.e., p(RCX)
< 1/2 < p(TCX/OCX), is also a sufficient, but not a necessary, condition for
p(RCX) < p(RCX/OCX), i.e., OCX non-deductively confirms RCX.
The suggested transition itself amounts, in Dorling's terms, to the claim that
a positivist attitude towards the referential claim of X, and a fortiori to its
theoretical claim, may well transform into a realist attitude towards its theoretical claim, and a fortiori towards its referential claim. It should be noted that
Dorling in fact deals with weaker claims. In our terms, they are (almost)
equivalent to: Tt is a subset of X, Vr(X) is a subset of Vr(Tt), and To is a
subset of πX. However, the formal argument is essentially the same. Dorling
illustrates the historical reality of the suggested conversion with many examples,
notably Dalton's theory of the atom. To be sure, Dorling also notes that the
conversion may also move in the reverse direction, for various reasons.
Of course, as indicated in the last row of Table 9.1., the same argument may
be applied to each specific reference claim of X with respect to r in Vt − Vo,
indicated by RCrX. Whether we deal with the general claim RCX or a
specific claim RCrX, the argument essentially uses their entailment by TCX,
and its entailment of OCX and hence OSX. Since these concern whole
theories, these theoretical arguments are of a holistic nature. It may well be
that there are theoretical arguments for specific reference claims, but we do
not know them.
Let us now suppose that X has been falsified by the evidence and hence that
we have to conclude that OSX is false and hence that OCX and hence TCX
are also false. Now the qualitative and the quantitative argument collapse.
In quantitative terms, this means that the relevant posterior probabilities
p(OCX/not-OSX) and p(TCX/not-OSX) become 0 (instead of becoming
larger than 1/2). To be sure, it does not imply that p(RCX/not-OSX) or
p(RCX/not-OCX) also become 0, for the referential claim itself has not been
falsified. However, the evidence also does not confirm it, to say the least.
Finally, suppose πX is not a subset of S and hence that X does not entail
all established laws. In that case, we also have to conclude that OSX has been
falsified, with the same consequences as before.
Returning to the case of favorable evidence, confirming the referential claim,
it is worthwhile discussing the well-known challenge of Laudan (1981) that
there have been in the history of science many examples of very successful
theories that nevertheless use, according to our present lights, terms that do
not refer. From our analysis thus far it is clear that this is perfectly possible.
That is, it is perfectly possible that RCX, and hence TCX, are false, but that
nevertheless TCX implies OCX and hence OSX. Hence, TCX may be false
and nevertheless be confirmed by the evidence, without having a true referential claim.
Let us now turn to the evaluation of the comparative claim that theory Y
is closer to the referential truth Vr than theory X. To accept such a comparative
claim, it is necessary to argue for all component claims of Y about which X
disagrees, and to reject it, it is sufficient to argue against one such claim (e.g.,
if Y claims that r Tt-refers, then argue that it does not). But again there may be
holistic arguments in favor of all deviation claims at the same time, or against
the total deviation claim in one stroke.
Recall that MSF(X, Y, R/S) indicates that Y is observationally more successful than X, and that MTL(πX, πY, To) and MTL(X, Y, Tt) indicate that Y is
observationally and theoretically closer to the truth than X, respectively. Let
MTL(Vr(X), Vr(Y), Vr(Tt)), or simply MR(X, Y, Tt), indicate that Y is referentially closer to the truth than X. We further assume that X is relatively
correct. Now, due to the Success Theorem, MSF(X, Y, R/S) d-confirms
MTL(πX, πY, To) which, due to the Projection Theorem, d-confirms
MTL(X, Y, Tt) on the condition that X is relatively correct (RC(X)). By
transitivity, MSF(X, Y, R/S) also d-confirms MTL(X, Y, Tt) on that condition.
Moreover, as a rule, we will also assume that MR(X, Y, Tt) and
MTL(πX, πY, To) and even MSF(X, Y, R/S) strengthen each other, though
there is no deductive relation. In other words, MTL(πX, πY, To) and
MSF(X, Y, R/S) non-deductively confirm MR(X, Y, Tt).
Thus far the situation is formally the same as for the separate claims.
However, since MTL(X, Y, Tt) does not entail MR(X, Y, Tt), we cannot
transplant Dorling's argument that, starting from an inclination to reject
MR(X, Y, Tt), and hence MTL(X, Y, Tt), we may well be inclined to accept
MTL(X, Y, Tt) and hence MR(X, Y, Tt). Of course, the fact that the quantitative argument is no longer valid does not exclude that the crucial condition
for a 'comparative referential conversion', viz., p(MR(X, Y, Tt)) < 1/2 <
p(MR(X, Y, Tt)/MSF(X, Y, R/S)), applies. In qualitative terms, the evidence
may well provide good arguments for the comparative referential conversion.
More generally, despite the fact that there is no strong supportive relation
between empirical progress and referential truth approximation, a kind of
default rule seems defensible: if Y has so far proven to be more successful than
X, then conclude, for the time being, that Y is at least as close to the referential
truth as X, except when there is evidence to the contrary. We will return to
such a rule in Subsection 9.3.1.
Suppose now that MSF(X, Y, R/S) is supposed to (non-deductively) confirm
MR(X, Y, Tt) such that we tend to accept it. What does the acceptance of
MR(X, Y, Tt) mean for the specific reference claims of X and Y? As far as
terms about which X and Y agree are concerned, nothing is accepted. As far
as their claims differ, the claim of Y is accepted, and hence the opposite one
of X is rejected. More precisely, for all r in Vr(Y) − Vr(X) the claim is that r
Tt-refers (and hence belongs to Vr), and for all r in Vr(X) − Vr(Y) the claim is
that r does not Tt-refer. Since the successes and failures of both theories as a
whole are at stake in MSF(X, Y, R/S), the theoretical arguments for these
specific claims of Y opposing those of X are again of a holistic nature.
Let us look at a similar argument to the one of Laudan in relation to the
comparative referential claim. Suppose that there are historical examples of
pairs of theories ⟨X, Y⟩ for which it holds that at the time Y was more successful
than X, whereas according to our present standards X was referentially closer
to the truth than Y. Again, this is perfectly possible, for using referring vocabulary by no means implies formulating very successful theories with it. The real
challenge of the Laudan type is hence to find pairs of theories ⟨X, Y⟩ that
satisfy the two mentioned conditions, such that X is, moreover, the best known
theory of those using the same vocabulary as X, or more precisely, having the
same referential claim as X. Though we cannot formally exclude that such
pairs exist, it seems highly unlikely as long as nobody has provided one.
In sum we may conclude that for acceptance and rejection of specific reference
claims on theoretical grounds, it is important to relate the reference claim to
the theoretical truth Tt corresponding to a certain domain D and a certain
theoretical vocabulary Vt. Then we may take as a pragmatic necessary and
sufficient criterion that a term r Tt-refers if and only if it is supposed to refer
according to the best available 'theory in Vt' in the sense defined above: for
attribute terms, Tt is a proper subset of the reproduction of its projection on
Vt − {r}, and for entity terms, there are Tt-referring attribute terms using the
entity terms as domain-set. To be sure, the best available theory may be
mistaken.
Probably because there is no clear-cut deductive relation between maximal
successfulness and the referential claim, let alone between being more successful
and having more referential truthlikeness, the suggested arguments may not,
however, be convincing. There is an ongoing debate about criteria of reference
in the literature. As Hacking (1983) and Cartwright (1983) have tried to argue,
it may be neither important nor even desirable that theoretical claims are true; it
is certainly desirable that our theoretical terms refer. Happily enough, they are
not too pessimistic about criteria for accepting or rejecting such reference
claims. This debate concentrates on experimental criteria, to which we now
turn our attention.
Hacking's manipulation criterion is perhaps the best known and most defensible.
In a somewhat broader form than he presents it himself, it amounts to the
following: an entity term and the relevant attribute terms refer as soon as we
regularly do experiments in which we manipulate these entities using the
various properties attributed to them in order to interfere in other more
hypothetical parts of nature. Hacking and Cartwright, defending 'entity realism',
focus on the reference of entity terms, or on the reality of theoretical entities,
and restrict the reference criterion for them to the causal properties attributed
to them. To be sure, the manipulation criterion is a (pragmatic, hence fallible)
sufficient condition. It can only be used for accepting reference claims, not for
rejecting them. Radder (1988, pp. 148-153; 1996, pp. 73-92), defending a strong form
of 'referential realism', beyond theory realism (see Chapter 1), has developed
another, but to some extent comparable, manipulation criterion which also
covers attribute terms, and which he uses for the formulation of a co-reference
criterion. Of course, when speaking of reference of attribute terms, it is always
reference via entities having the corresponding attributes.
Introduction
We will first extend the corrective analysis of inference to the best explanation,
introduced in Section 7.4. Moreover, we will formulate speculations about
commensurability and ideal languages, and extrapolate the analysis of
Sections 9.1. and 9.2. to the comparison of vocabularies and research programs.
We will also explicate the notion of a theoretical or explanatory research program.
9.3.1. Inference to the best theory
The corrections of IBTi compared to IBEi are the following: unlike IBEi,
IBTi is not restricted to unfalsified theories, and IBTi amounts to a comparative conclusion attached to a comparative premise, whereas IBEi amounts to
an absolute conclusion attached to a comparative premise. Finally, IBTi is directly justifiable
in terms of truth approximation, viz., by the Forward Theorem and the relevant
Upward Theorems, whereas a justification of IBEi for any i is difficult to
imagine.
IBTo may well be acceptable for the constructive empiricist as far as he is
willing to take the notion of the nomic truth on the observational level seriously,
which does not seem to be the case for Van Fraassen (1980). Moreover, it is
not easy to see why an instrumentalist could hesitate to subscribe to it. IBTr
on the referential level will be acceptable for the (theory) realist. Whether IBTt
is also acceptable simply depends on whether one is willing to speak of true
or false claims when there may be fictions involved. To be sure, the truth on
that level does not essentially use fictional terms, for it is merely a reproduction
of the referential truth.
What about the referential realist? He or she will be willing to subscribe to
IBTo, but it is unlikely that somebody like Cartwright (1983) will subscribe
to IBTr, let alone to IBTt. On the basis of our analysis, she might conclude
that IBTr is defensible and hence innocent, but she will see it nevertheless as
unimportant. For the referential realist, the crucial question concerns, of course,
whether there is something like 'inference to the most likely cause' (Cartwright
1983), or in our terminology, and more generally, 'inference to the best
(total) referential claim'. In the light of our analysis, we may formulate it
as follows:
Inference to the best referential claim as the closest to the referential truth
(IBRCCRT)
If a theory in Vt has so far proven to be the best one among the available
theories, then (choose it, i.e., apply RS and) conclude, for the time being,
that its referential claim is the closest to the referential truth
Recall that if Y is the best theory in Vt, then IBRCCRT suggests concluding
that Vr(Y) is closer to Vr = Vr(Tt) = Vr(Tr) than Vr(X), as defined in
Subsection 9.2.1., for any alternative theory X. Note that it does not seem to
make sense to formulate IBRCCRT for the theories on the r-level, for then
we have already assumed prior knowledge of which terms refer and which
do not.
Our previous analysis suggests that IBRCCRT is plausible, though its
conclusion is certainly, like the IBTi conclusions, not compelling. However, it
is without doubt more plausible than the straightforwardly generalized rule
suggested by Cartwright's 'inference to the most likely cause', where we may
or may not leave room for already falsified theories:
There is only one exception to the superiority of IBRCCRT over IBRCT (in
the best theory version). They coincide when there is only one theoretical term
at stake, which is Tt-referring according to the best theory.
9.3.2. Generality speculations
The TP-Postulate is the first (conceptual) constraint leading away from absolute
relativism of vocabularies. The F-Postulate is another one. In this subsection
we will formulate some other notions that make it possible to explore still
stronger non-relativistic positions. Some of the notions that will be defined
(e.g., exhaustiveness) are related to (some version of) the logical notion of
so-called Ramsey-eliminability, but we will not explore these relations here.
A vocabulary V for domain D, generating Mp(V) and a unique subset of
nomic possibilities T(V, D), guaranteed by the Nomic Postulate, is a (proper
conceptual) level for D if T(V', D) = T(V, D) for all vocabularies V' such that
Mp(V') includes Mp(V). In particular, V' might contain a relation, while V has
a function that happens to be sufficient to characterize T. Hence, when functions
are postulated it is always possible to enlarge the type of structure to a proper
level by replacing that function by a relation of the appropriate kind. For this
reason we may assume that Vo, Vt and Vr in the previous subsections generated
proper levels. From now on, vocabularies generate proper levels for D, fulfilling,
in line with the previous sections, minimally the following relation.
Vt is a superlevel of Vo (and Vo a sublevel of Vt) if Vo is a subset of Vt such
that πT(Vt, D) = T(Vo, D), the Truth Projection Postulate, where π is the projection function from Mp(Vt) onto Mp(Vo).
Recall that theory X (X ⊆ Mp(Vt)) reproduces theory X0 (X0 ⊆ Mp(Vo)) if
π⁻¹X0 =def {x ∈ Mp(Vt) | π(x) ∈ X0} = X, which implies that πX = X0, due to
the fact that ππ⁻¹X0 = X0 holds in general.
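The reproduction π⁻¹X0 can be sketched over a toy Mp(Vt) of pairs, with π projecting onto the first component; all sets below are invented for illustration.

```python
# Sketch of reproduction: pi^-1(X0) over a toy Mp(Vt) of pairs; the
# projection pi keeps the observational (first) component.

def inverse_image(X0, Mp_Vt, pi):
    """pi^-1(X0) = {x in Mp(Vt) | pi(x) in X0}."""
    return {x for x in Mp_Vt if pi(x) in X0}

pi    = lambda s: s[0]                                # projection
Mp_Vt = {(o, t) for o in (1, 2) for t in ('a', 'b')}  # toy Mp(Vt)
X0    = {1}                                           # theory in Mp(Vo)

X = inverse_image(X0, Mp_Vt, pi)     # X reproduces X0
print(sorted(X))                     # [(1, 'a'), (1, 'b')]
assert {pi(x) for x in X} == X0      # pi(pi^-1(X0)) = X0 in general
```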
It is now not difficult to prove the following commensurability claim: for any
two levels there is at least one common superlevel (possibly trivially defined
by 'concatenation' of tuples of components) on which all theories can be
reproduced and, hence, can be compared in principle. In Subsection 9.3.3. we
will show how this works in some more detail.
We say that Vt − V' consists of T'-fictions if Tt = T(Vt, D) is the
(Vt-)reproduction of T' = T(V', D), the Fiction Postulate. The complement of
the F-Postulate for Vt − V' is the Reference Postulate for Vt − V': Tt is a proper
subset of the (Vt-)reproduction of T(V'', D) of any genuine sublevel V'' of Vt
being equal to or a superlevel of V'. Under this condition, all terms in Vt − V'
Tt-refer. Vt is said to (Tt-)refer if all its (theoretical) members Tt-refer.
Vt is exhaustive for D if it has no referring superlevel, i.e., there is no superlevel
Vt' such that Vt' − Vt satisfies the Reference Postulate. Note that if Vt and Vt'
are both exhaustive, then one has to be a subset or a sublevel of the other.
Finally, Vt is optimal if Vt is exhaustive for D and if it Tt-refers, i.e., in suggestive
terms, if Tt = T(Vt, D) is the whole, and nothing but, the (nomic) truth about D.
In the present perspective the classical ideal language assumption (ILA) can
be formulated as follows: for every domain there is an optimal conceptual level.
This seems to be the strongest non-relativistic position. It amounts to extreme
metaphysical realism. Of course, ILA presupposes the Nomic Postulate. Hence
for the social sciences it is at least as problematic as that postulate itself is for
the social sciences. But ILA is certainly also problematic for the natural sciences.
Although ILA may regularly be fruitful as a heuristic principle, the following
conflicting principle seems to be more defensible (e.g., by Popper) as a guideline:
although there may exist referring levels, there are no exhaustive levels, hence
no optimal levels. In other words, for every level one can find a referring
superlevel. This heuristic principle may be called the refinement principle. It is
the fundamental assumption, for instance, underlying idealization and concretization. Every fortunate application of this principle not only leads to new types
of empirical success and hence empirical progress, but also, together with the
Nomic Postulate, to a deeper explanation of successes and success differences.
Since we subscribe to this principle, it is clear that we have to construct our
language. Besides this aspect, the name 'constructive realism' for our favorite
position is supposed to refer as well to (nomic) realism, that is, the intra-level
Nomic Postulate and the two inter-level postulates: the TP- and the F-Postulate.
9.3.3. Extended truthlikeness comparison
is a subset of Vt2
Vt2 − Vr12 is a subset of Vt1
The first clause guarantees that Vt2 does not lose Tt1-referring terms. The
second clause guarantees that the union of Vt1 and Vt2 creates no extra
referring terms that belong to Vt1 but not to Vt2. The third clause guarantees
that the, according to Tt12, non-referring terms of Vt2 already belong to Vt1.
It is easy to check that this definition is such that Vt12 is always at least as
close to Vr12 as Vt1. Hence, a fusion of vocabularies cannot become referentially
worse. Moreover, when no new referring terms are created, i.e., when
Vr12 − (Vr1 ∪ Vr2) is empty, the second clause vanishes, and the third clause
reduces to the claim that Vt2 − Vr2 is a subset of Vt1.
Turning to the truthlikeness comparison of theories, and hence to the
improvement of theories with respect to truthlikeness, we will formally elaborate
the commensurability claim mentioned in the previous section. Note first that
a theory X can be more precisely represented by (D(X), Vo(X), Vt(X), X),
where X is a subset of Mp(Vt(X)) and where Vt(X) generates, in combination
with D(X), the theoretical truth Tt(X), the referential truth Vr(X) and the
substantial truth Tr(X). Let, in line with the commensurability claim, Vt(X, Y)
indicate the united vocabularies of Vt(X) and Vt(Y) and Tt(X, Y), Vr(X, Y),
may refuse to accept such claims. However this may be, the full evaluation
report of a theory accounts for observational and referential successes.
More or less analogously, we can introduce the notion of an evaluation
report of a vocabulary. We first deal with the evaluation report of Vo. It
mentions the accepted observational generalizations and theories and the
accepted sequences of observational theories (≠ observation theories) of
increasing (observational) success. Note that such a report is holistic in the
sense that it judges Vo as a whole. Similarly, assuming that there is associated
with each theoretical term of a vocabulary Vt the reference claim that the term
Tt-refers, the evaluation report of Vt mentions (in addition) the accepted and
rejected reference claims and, on the level of accepted reference claims, the
accepted hypotheses and theories, and the accepted sequences of theories of
increasing (observational) success. Again, all this is a holistic evaluation of Vt.
In sum, the evaluation report of a vocabulary accounts for observational,
referential and theoretical successes.
Finally, the evaluation report of a research program refines the evaluation
report of the corresponding vocabulary by indicating whether or not an
accepted hypothesis or sequence of theories respects the core theory of the
program, all on the accepted referential level.
Let us now turn to the success comparison of theories, vocabularies and
programs. For success comparison of theories it is, of course, not necessary that
they share the same theoretical vocabulary, let alone that they belong to the
same research program. We only have to assume that they share the domain
and the observational vocabulary, which may or may not require some
manipulation. This assumption is enough for the possibility that one theory is not
only observationally more successful than another (as defined before) but also
referentially more successful, in the sense that all accepted reference claims of
the second theory are (accepted) reference claims of the first, whereas all rejected
reference claims of the first are (rejected) reference claims of the second.
A vocabulary Vt2 is observationally at least as successful as Vt1 if all accepted
observational generalizations and theories and all accepted sequences of
observational theories of increasing (observational) success of Vt1 also belong to
Vt2. Similarly, Vt2 is, in addition, referentially at least as successful as Vt1 if
(in addition) all accepted reference claims of Vt12 (hence including possible
extra claims that were only accepted due to the fusion of vocabularies) are
reference claims of Vt2 and all rejected reference claims of Vt2 are reference
claims of Vt1. Finally, Vt2 is theoretically at least as successful as Vt1 if, on
the level of accepted reference claims, the accepted hypotheses and theories,
and the accepted sequences of theories of increasing (observational) success of
Vt1 also belong to Vt2.
Finally, program P2 is observationally/referentially/theoretically at least as
successful as P1 if the corresponding vocabulary Vt(P2) is at least as successful
as Vt(P1) in the relevant sense and if C(P2) is at least as successful as C(P1)
in the relevant sense and if all violations of C(P2) of the successes of Vt(P2)
and Vt(P1) are also violations of C(P1).
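Since these comparative notions are purely set-theoretic, they can be sketched as subset checks on evaluation reports. The following is a minimal illustration only; the record fields, names, and Python encoding are our own assumptions for the sake of the sketch, not the book's formal apparatus.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Report:
    """Illustrative evaluation report of a (theoretical) vocabulary Vt."""
    obs_successes: frozenset = frozenset()   # accepted observational generalizations, theories, sequences
    ref_claims: frozenset = frozenset()      # all reference claims associated with the vocabulary
    accepted_refs: frozenset = frozenset()   # accepted reference claims
    rejected_refs: frozenset = frozenset()   # rejected reference claims
    theo_successes: frozenset = frozenset()  # accepted hypotheses/theories on the accepted referential level


def obs_at_least_as_successful(r2: Report, r1: Report) -> bool:
    # Vt2 is observationally at least as successful as Vt1:
    # all observational successes of Vt1 also belong to Vt2
    return r1.obs_successes <= r2.obs_successes


def ref_at_least_as_successful(r2: Report, r12: Report, r1: Report) -> bool:
    # accepted reference claims of the fused Vt12 are reference claims of Vt2,
    # and rejected reference claims of Vt2 are reference claims of Vt1
    return r12.accepted_refs <= r2.ref_claims and r2.rejected_refs <= r1.ref_claims


def theo_at_least_as_successful(r2: Report, r1: Report) -> bool:
    # on the accepted referential level, Vt1's theoretical successes carry over to Vt2
    return r1.theo_successes <= r2.theo_successes
```

The program comparison then adds the analogous checks for the core theories C(P1) and C(P2) on top of these.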
The extended comparative success evaluation of theories, programs and
vocabularies can be used to refine the general principle of improvement of theories
(PI), and the special principles of improvement, viz., improvement guided by
a research program (PIRP), or just by a (core) vocabulary (PICV). Recall that
PIRP and PICV include something like a principle of improvement of programs
and a principle of improvement of vocabularies, respectively (where the
improvement of programs is, in the first instance, guided by a vocabulary).
Hence, these principles will then also be refined.
In Subsection 9.3.1. we studied rules of inference restricted not only to a fixed
domain, but also to a fixed vocabulary. It is tempting to try to extend IBRCCRT to different vocabularies and research programs for the same domain.
Above we have suggested definitions for the claim that one vocabulary is more
successful than another. Hence, it is clear that something like 'inference to the
best vocabulary as the closest to the (united) referential truth' can be formulated,
but we will not elaborate such a definition. Similarly, since we have suggested
definitions for the claim that one research program is more successful than
another, something like 'inference to the best program as the closest to the
truth' seems possible.
9.3.5. Explanatory research programs
One other question can now be answered. Recall that we explicated at the end
of Chapter 7 the nature of descriptive and, more specifically, inductive research
programs, and that we had to postpone the explication of the nature of
explanatory or theoretical programs to the present chapter dealing with stratified theories. An explanatory program may or may not use a theoretical
vocabulary. Even (nomic) empiricists can agree that it is directed at establishing
the true observational theory. If there are theoretical terms involved, the
referential realists will add that it is also directed at establishing the referential
truth. The theory realist will add to this that it is even directed at establishing
the theoretical truth. To be sure, aiming at the referential truth and the theoretical truth implies aiming at the substantial truth. Scientists working within such
a program will do so by proposing theories respecting the hard core as long
as possible, but hopefully not at any price. They will HD-evaluate these theories
separately and comparatively. RS directs theory choice. Although that rule is
demonstrably functional for all four distinguished kinds of truth approximation,
it cannot guarantee, even assuming correct data, a step in the direction of the
relevant truth. Though the basic notions of successfulness and truthlikeness
are sufficient to give the above characterization of the typical features of
explanatory research programs, they usually presuppose refined means of comparison, which will be motivated soon and elaborated in the next chapter.
conclusion that being true of a theory is not very important in the context of
designing better and better theories. A bad reason, however, is rejecting the
idea to try to approach the substantial or theoretical truth. For, whether the
referential realist likes it or not, in view of the Forward and the Upward
Theorems, the application of RS is functional for truth approximation on the
corresponding level. In this light, it seems wiser to use, without reservation,
the heuristic of the theory realist: the aim is to approach the strongest true
theory. This not only enables an explanation of observational truth approximation, but also gives rise to deeper explanations of success differences.
Empiricists and referential realists who are not convinced by these arguments
in favor of the theory realist heuristic, but subscribe to the view, as most of
them seem to do, that (terms do or do not refer and) that hypotheses are true
or false, whether we can test such claims in a straightforward sense or not,
should realize the following. If one subscribes, in addition, to the view that
HD-testing of a hypothesis pertains to the truth question of that hypothesis in
one way or another, and hence that it is functional for answering this question,
it immediately follows that comparative HD-evaluation is functional, not only
for observational, but also for theoretical and substantial truth approximation.
The reason is that comparative HD-evaluation is, or at least can be reconstructed as, testing the relevant comparative truth approximation hypothesis.
To deny this, one has to claim that HD-testing is not at all relevant, let alone
informative, about the truth question of the theoretical surplus claim of the
hypothesis under consideration. This is really far from scientific practice, in
particular, when one realizes that time and again theories have been accepted
as true on the basis of the available evidence. Amongst other things, scientists
extend in this way their observational power. Of course, all this does not
exclude that later research forced the conclusion that such theories were, after
all, false.
The foregoing considerations essentially presuppose occasions where RS can
be applied. However, the applicability of RS is, in general, exceptional, for
success is usually divided. For this reason, it is important to note that the
realist heuristic has other significant advantages over the empiricist and referential realist position. This is particularly the case when theory Y is explanatorily
better than X, in the sense that the general successes of Y exceed those of X,
whereas Y has some extra counterexamples. As has been explained, such extra
counterexamples of Y may always be relativized. That they are no counterexamples of X may only be accidental, i.e., it may be due to accidentally
observationally correct models of X. In that case, and only in that case, Y may
still be closer to the theoretical truth than X, even though πY is only externally,
but not internally, closer to the observational truth than πX. This further
relativization of the role of falsification within the evaluation methodology is
not only unavailable for the empiricist; it is also unavailable for the referential realist.
In sum, there are good reasons for the instrumentalist to become constructive
Concluding remarks
We conclude by briefly extending the analysis of the foundational, correspondence, and dialectical intuitions of scientists and philosophers of Chapter 8 to
stratified truthlikeness and truth approximation. Moreover, we will recapitulate
the main reason for a further refinement of the analysis, in order to cover
real-life scientific examples, in particular examples based on idealization and
concretization.
In Section 8.1. concerning foundations, there remained the question of
whether a dual foundation can be given for 'closer to the observational truth'.
Given that the basic definitions of observational, substantial and theoretical
truthlikeness all have a formally similar internal and an external clause, they
can all obtain a dual foundation, for the external clause can always be transformed into a plausible clause in terms of sets of sets of conceptual possibilities,
representing laws and general hypotheses on the relevant level. For the notion
of referential truthlikeness, the notion of a dual or a uniform (model or consequence) foundation does not make sense. It is essentially an application of the
definition of actual truthlikeness for propositions. To be sure, this definition
uses the same underlying intuition in terms of good and bad parts as that from
which Popper apparently started the study of truthlikeness, but then with an
unfortunate specification.
In Section 8.2. we gave intralevel explications of correspondence intuitions
concerning actual and nomic truth and truthlikeness. It is not difficult to see
that the actualist explications can only be applied to referential truth and
referential truthlikeness, whereas the nomic explications can be applied to
PART IV
INTRODUCTION TO PART IV
In this last part we will complete the study of truthlikeness and truth approximation by introducing a second major sophistication accounting for a fundamental feature of most theory improvement, viz., new theories introduce new
mistakes, but mistakes that are in some way less problematic than the mistakes
they replace.
Chapter 10 introduces this refinement in a qualitative way by taking into
account that one incorrect model may be more similar, or 'more structurelike'
to a target model than another. This leads to refined versions of nomic truthlikeness and truth approximation, with adapted conceptual foundations. It is
argued, and illustrated by the Law of Van der Waals, that the frequently and
variously applied method of 'idealization and successive, in particular, double
concretization' is a special kind of (potential) refined nomic truth approximation. Combining both sophistications, one obtains explications of stratified
refined nomic truthlikeness and truth approximation.
Chapter 11 illustrates by two sophisticated examples that the final analysis
pertains to real-life, theory-oriented, empirical science. The first example shows
that the successive theories of the atom, called 'the old quantum theory', viz.,
the theories of Rutherford, Bohr, and Sommerfeld, are such that Bohr's theory
is closer to Sommerfeld's than Rutherford's. Here, Bohr's theory is a (quantum)
specialization of Rutherford's theory, whereas Sommerfeld's is a (relativistic)
concretization of Bohr's theory. This guarantees that the nomic truth, if not
caught by the theory of Sommerfeld itself, could have been a concretization of
the latter. In both cases, Sommerfeld would have come closer to the truth than
Bohr and Rutherford. The second example illustrates a nonempirical use of
the idealization and concretization methodology, viz., aiming at (approaching)
a provable interesting truth. In particular, it is shown that the theory of the
capital structure of firms of Modigliani and Miller is closer to a provable
interesting truth than the original theory of Kraus and Litzenberger, of which
the former is a double concretization.
In Chapter 12 the prospects for quantitative versions of actual and nomic
truthlikeness are investigated. In the nomic refined case there are essentially
two different ways of corresponding quantitative truth approximation, a nonprobabilistic one, in the line of the qualitative evaluation methodology, and a
probabilistic one, in which the truthlikeness of theories is estimated on the
basis of a suitable probability function. As already stressed in Chapter 3,
10. REFINEMENT OF NOMIC TRUTH APPROXIMATION
Introduction
In Chapter 7 we presented the basic definition of nomic truthlikeness 1 of
theories in terms of the structuralist approach to theories. It surpassed first of
all the main problem of Popper's original definition, which was inadequate,
for it did not leave room for false theories, i.e., theories having actual mistakes,
i.e., real counterexamples. The basic definition was, moreover, attractive in
other conceptual, logical and methodological respects, as we have shown in
Part III. Moreover, for true theories, the basic definition captures 'scientific
common sense' and practice. Improving a true theory, in a cautious way,
amounts to strengthening it without making it false, i.e., without losing correct
models. Hence, in perfect agreement with the basic definition, improving a true
theory is primarily a matter of dropping mistaken models, without introducing
new ones.
However, for improving false theories, the basic definition is rather naive.
Although that definition, in contrast to Popper's, leaves room for improving a
false theory by another false theory, 'basic improvement' can only be a matter
of dropping mistaken models and adding correct ones, or, in short, a matter
of replacing mistaken models by correct ones. Hence, improvement by replacing
mistaken models by other mistaken models, but better ones, is out of the
question: all mistaken models are equally bad. This certainly is not in agreement
with scientific practice and common sense. For this reason, the basic definition
does not have many real-life scientific examples. In cases of scientific progress
a false theory is usually replaced by another false theory, but a better one, in
the sense that mistaken models are replaced by models which are also mistaken,
but less so. A paradigmatic case is the concretization of an idealized theory.
The indicated problem with the basic definition becomes particularly telling
when the truth is assumed to be complete. As Oddie (1981) has aptly remarked,
it then is child's play to approach the truth when we have a false theory at our
disposal, for then (as is easy to check by consulting Lemma 2 and (ii) of
Section 8.1.) we only have to strengthen it 2 . Although this is rather plausible
from the basic perspective, strengthening may well be a matter of dropping
relatively less mistaken models, in which case the child's play should be blocked
from the refined perspective of better and worse models.
A refined definition of truthlikeness should hence account not only for real
counterexamples but also for the fact that one mistaken model may be more
similar to a required model than another. For then there is room for improving
a theory by introducing new, but less mistaken models. Of course, a refined
definition should reduce to the basic definition under the relevant assumptions.
Finally, it should retain the attractive logical and methodological features of
the basic definition.
The structuralist approach to nomic truthlikeness is particularly useful for
the suggested refinement, for it is plausible to base it on a postulated underlying
notion of structurelikeness. We start in Section 10.1. with some general constraints for the notion of structurelikeness, and hence for the notion of truthlikeness of structure descriptions. A number of specific examples, already indicated
in Chapter 7 in the context of actual truthlikeness and truth approximation,
are now studied in some more detail. In Section 10.2. the refined definition of
truthlikeness of theories is presented, based on the notion of structurelikeness,
using a sophisticated version of the conceptual justification for the basic definition. It will be shown that refined versions of all merits of the basic definition
follow. Section 10.3 shows that a refined version of the dual foundation of
truthlikeness and truth approximation is possible. In Section 10.4 it is pointed
out that 'idealization and concretization' is a special kind of potentially refined
truth approximation. This is illustrated by Van der Waals's theory of ideal
gases. Moreover, it is indicated how idealization and concretization can function
as a strategy in validity research around 'interesting theorems'. Finally, in
Section 10.5. we show that stratification of refined truthlikeness and truth
approximation is perfectly possible, leading to refined versions of the Projection
and Upward Theorems.
10.1. STRUCTURELIKENESS
Introduction
Introduction
It is clear that the basic definition of nomic truthlikeness of theories does not
exploit the idea that one structure may be more similar to a second than a
third, i.e., the idea of an underlying notion of structurelikeness. Let us assume
that there is such an underlying ternary relation of structurelikeness s(x, y, z),
stating that y is at least as similar to z as x, and satisfying the minimal
s-conditions introduced in Section 10.1.: being centered and centering (together:
s(x, y, x) if and only if x = y) and conditional left and right reflexivity (s(x, y, z)
implies, e.g., s(x, x, y) and s(y, z, z)). Recall finally that x and z are said to be
related or comparable, indicated by r(x, z), iff there is a y between x and z, that
is, iff there is a y such that s(x, y, z). Subset X of Mp(V) is convex when it
contains all y for which there are x and z in X such that s(x, y, z).
In this chapter 'models' and 'consequences' are always s-models and
s-consequences in the sense of Section 8.1. That is, if X and Y are subsets of
Mp(V) then x in X is a model of X and Y is a consequence of X iff X is a
subset of Y.
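By way of illustration, the minimal s-conditions just recalled can be verified mechanically for a concrete candidate relation. The betweenness relation on numbers used below is our own toy choice, a sketch rather than part of the book's apparatus:

```python
def s_between(x, y, z):
    """y is at least as similar to z as x: betweenness on numbers."""
    return x <= y <= z or z <= y <= x


def r_related(x, z, domain):
    """x and z are related iff some y in the domain lies between them."""
    return any(s_between(x, y, z) for y in domain)


domain = range(-3, 4)

# centered and centering (together): s(x, y, x) iff x == y
assert all(s_between(x, y, x) == (x == y) for x in domain for y in domain)

# conditional left and right reflexivity: s(x, y, z) implies s(x, x, y) and s(y, z, z)
assert all(not s_between(x, y, z) or (s_between(x, x, y) and s_between(y, z, z))
           for x in domain for y in domain for z in domain)
```

For this relation any two numbers are related, since s_between(x, x, z) always holds; nontrivial candidates may leave some pairs unrelated.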
This clause, which may be plausible in itself, will be elucidated in the next
section. It is easy to check that (Rii) implies (Bii), due to the first minimal
s-condition, and that, for the same reason, it even reduces to (Bii) when X is
true, i.e., when T is a subset of X. Hence, it is a strengthening of the
corresponding basic clause for false theories. Moreover, it is easy to see that it nicely
reduces to the basic clause when the underlying structurelikeness is trivial, i.e.,
when s(x, y, z) iff x = y = z.5
The second refined clause, the refined external clause, is not a strengthening
but a weakening of the corresponding basic one, which required that
Y - (X U T) was empty. As suggested before, in the context of (nontrivial)
structurelikeness, for improving false theories, it is plausible to leave room for
members of Y - (X U T), i.e., extra mistaken models of Y.
(Ri) for all y in Y - (X U T)
there are x in X - T and z in T - X such that s(x, y, z)
(Ri) requires that the extra mistakes are between X - T and T - X, that is, all
y in Y - (X U T) have to be between a member of X - T and one of T - X. This
clause guarantees, as it were, that Y is moving up from X to T, without detour.
Note that (Ri) reduces to the basic clause (Bi) when X is true, for in that case,
there cannot be members in T - X, hence there is no room for extra mistaken
models of Y. Note, moreover, that (Ri) reduces to (Bi) when the underlying
structurelikeness is trivial.
In sum, the resulting definition of refined nomic truthlikeness is the
conjunction (Ri)&(Rii), indicated by MTLr(X, Y, T). The strict version
MTLr+(X, Y, T) is again defined by imposing in addition that MTLr(Y, X, T)
does not obtain. Moreover, both definitions reduce to the corresponding basic
truthlikeness definitions when X is true and/or when s is trivial. It is easy to
check that (Ri) and (Rii), like (Bi) and (Bii), can be read as dealing with all
nonnomic possibilities and all nomic possibilities, respectively. E.g., (Ri) may
start as follows: "for all y not in T, if y belongs to Y - X, then ...." Hence,
their names, external and internal clause, respectively, are again plausible. In
the next section we deal with the intuitive foundations of this definition.
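For finite sets of structures, the external clause (Ri) is a finite check. The sketch below implements just (Ri) (the internal clause (Rii) is stated earlier in the section and omitted here), and shows its reduction to the basic clause (Bi) for trivial s; the example sets and the betweenness relation are our own illustrative choices:

```python
def Ri(X, Y, T, s):
    """(Ri): every extra mistaken model y in Y - (X U T) lies between
    some x in X - T and some z in T - X under structurelikeness s."""
    return all(any(s(x, y, z) for x in X - T for z in T - X)
               for y in Y - (X | T))


def s_trivial(x, y, z):
    # trivial structurelikeness: s(x, y, z) iff x = y = z
    return x == y == z


# With trivial s, (Ri) collapses to the basic clause (Bi): Y - (X U T) must be empty.
assert Ri({1, 2}, {2, 3}, {2, 3}, s_trivial)          # no extra mistaken models
assert not Ri({1, 2}, {2, 3, 4}, {2, 3}, s_trivial)   # 4 is an extra mistaken model


def s_between(x, y, z):
    # betweenness on numbers: y is at least as similar to z as x
    return x <= y <= z or z <= y <= x


# With nontrivial s, extra mistaken models between X - T and T - X are allowed.
assert Ri({0}, {5, 10}, {10}, s_between)    # 5 lies between 0 in X - T and 10 in T - X
assert not Ri({0}, {20}, {10}, s_between)   # 20 is not between X - T and T - X
```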
Figure 10.1. represents the resulting situation, assuming that being on a
horizontal line corresponds to being related. Arrows indicate existential claims.
Whereas x', y' and z' are only supposed to be on a horizontal line in X, Y, and
T, respectively, x, y and z are supposed to belong more particularly to X - T,
Y - (X U T), and T - X, respectively. Note that the emptiness of X n T - Y,
corresponding to 1, see below, has already been built into the figure.
Figure 10.1. Refined truthlikeness: Y is closer to the truth T in the refined sense than X
guaranteed by the refined definition. In Section 10.4. we will see that theorylikeness based on (antisymmetric) concretization of structures provides an antisymmetric example.
Turning to noncentral symmetry notions, left antisymmetry is, in view of
the possibility of sequences of theories straightforwardly converging to the
truth, the most interesting notion. Under certain, strong, conditions theorylikeness is left antisymmetric: viz., when structurelikeness is (de)composable,
defined by s(x, y, z) if and only if r(x, y) and r(y, z), and when X and Y are
convex, then MTLr(X, Y, Z) and MTLr(Y, X, Z) imply X = Y. That s is decomposable in the sense that s(x, y, z) implies r(x, y) and r(y, z) follows immediately
from conditional reflexivity and the definition of r. That s is composable in the
sense that r(x, y) and r(y, z) together imply s(x, y, z) is a substantial condition.
But, as we will see in Section 10.4., it is trivially satisfied by the ternary relation
of concretization. Since the proof of the claimed left antisymmetry seems,
relatively speaking, the most difficult one in the present section, we will give it.
There is one property of MTL which is not at all shared by MTLr, viz.,
specularity: MTLr(X, Y, Z) does not generally imply MTLr(Z, Y, X). This is
directly related to the fact that MTL deals with internal and external mistakes
in essentially the same way, whereas MTLr introduces a fundamental asymmetry between these kinds of mistakes.7
10.2.2. Objections and their evaluation
Figure 10.3. Case 1 on the left. Case 1 with T blown up on the right
A first reaction might be that the type of similarity relation sd seems, due to
its purely quantitative background, not in the spirit of our qualitative approach.
The refined claim is blocked as soon as we impose a plausible qualitative
restriction on sd leading to the relation defined in Subsection 10.1.2., indicated
by s2. That is, let (x2, y2) be at least as similar to (x3, y3) as (x1, y1) iff
(x1 ≤ x2 ≤ x3 or x3 ≤ x2 ≤ x1) and (y1 ≤ y2 ≤ y3 or y3 ≤ y2 ≤ y1). It is easy to
check that this condition implies that sd((x1, y1), (x2, y2), (x3, y3)) holds, but
that the counterintuitive case 1 is now excluded.
However, s2 has similar counterintuitive cases. Case 2: let T be the (unit)
(1, 5)-square, and let X be the (1, 1)-square. Then consider the rectangle
Y formed by the [0, 1]-interval on the X-axis and the [0, 3]-interval on the
Y-axis, hence the union of the (1, 1)-, the (1, 2)- and the (1, 3)-square. See
the left side of Figure 10.4. (again without coordinates). Again, X is closer to
T than Y in the basic sense, whereas Y is (not only as close to, but even) closer
Figure 10.4. Case 2 on the left. Case 2 with T blown up on the right
to T than X in the refined sense. This may seem again counterintuitive since
Y is a strong weakening of X.
Hence, the cause of the trouble is not sd as such, but that 'quantitatively
induced' similarity relations seem to permit, in general, easy ways of truth
approximation by weakening of a theory to a theory which is much weaker
than T itself. We may well exclude such counterintuitive cases by the extra
condition that Y should be in strength between X and T: the strong boundary
condition. However, although it would be effective for excluding all
convincing counterexamples to follow in this section, as already indicated in
Subsection 8.2.2., this might well exclude interesting cases of weakening followed
by strengthening or vice versa.8
To prevent such cases and to get a systematic connection between the basic
and refined approaches, it is more plausible to impose a somewhat weaker
condition. That is, the content of Y should be not larger than the content of
the union of X and T and not smaller than the content of the intersection of
X and T. Assuming that there is a quantitative measure m of the content of
theories, this boundary condition amounts to m(X n T) ≤ m(Y) ≤ m(X U T).
Note that this is a direct implication of the basic definition of comparative
truthlikeness, X n T ⊆ Y ⊆ X U T, since any measure function respects, by
definition, set-theoretic inclusion.
In the context of the two counterexamples it is plausible to measure the
content in unit squares, which leads for case 1 to m(X) = 1, m(Y) = 4, m(T) = 1,
m(X n T) = 0 and m(X U T) = 2, and hence to violation of the condition; for
case 2 it leads to m(X) = 1, m(Y) = 3, m(T) = 1, m(X n T) = 0 and m(X U T) = 2,
hence, again, to violation of the condition. In sum, adding the boundary condition
to the refined definition excludes the two counterintuitive cases. Hence, this
applies a fortiori to the strong version, rejected for being too strong.
It is plausible to require that quantitative definitions of basic and refined
truthlikeness satisfy the boundary condition, and one may or may not, depending
on one's purposes, impose it for qualitative definitions. It is important to
note that the boundary condition is automatically fulfilled by a paradigmatic
case of refined truth approximation, that is, double concretization, to be dealt
with in Section 10.4. As we will see, when Y is a concretization of X, and T of
Y, then Y is not only closer to T than X in the refined sense, but Y also satisfies
the boundary condition, simply due to the one-many character of the
concretization relation (including a one-one relation as an extreme case). We only
have to assume that X and T do not overlap, which is the normal situation
for concretization. Similarly for the other paradigmatic pattern, specialization,
followed by (nonoverlapping) concretization. However, wherever the boundary
condition makes a difference, a quantitative operationalization is required.
Mormann (1997) has objected to this extra condition that one would get
back similar counterexamples as before, by 'blowing up' T. For example, if
you blow up T in case 1 to the square with sides of 3 units of length, keeping
the center the same, see the right side of Figure 10.3., you get m(X) = 1, m(Y) =
4, m(T) = 9, m(X n T) = 0 and m(X U T) = m(X) + m(T) = 10, hence obeying
the boundary condition. However, now it is highly doubtful whether everybody
continues to see it as a counterintuitive case of the claim that Y is closer to T
than X. Note that Y now even satisfies the strong boundary condition.
Similarly, if you blow up T in case 2 to the rectangle formed by the
[0, 1]-interval on the X-axis and the [4, 7]-interval on the Y-axis, see the right side
of Figure 10.4., you get m(X) = 1, m(Y) = 3, m(T) = 3, m(X n T) = 0 and
m(X U T) = m(X) + m(T) = 4, hence, also obeying the boundary condition,
again even the strong one. However, again you get a case which is not so
obviously counterintuitive.
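The arithmetic of the boundary condition in these four cases is easily checked; a minimal sketch, using the unit-square contents given in the text:

```python
def boundary_condition(m_X_and_T, m_Y, m_X_or_T):
    """The boundary condition: m(X n T) <= m(Y) <= m(X U T)."""
    return m_X_and_T <= m_Y <= m_X_or_T


# Contents measured in unit squares, as in the text.
assert not boundary_condition(0, 4, 2)   # case 1: m(Y) = 4 exceeds m(X U T) = 2
assert not boundary_condition(0, 3, 2)   # case 2: m(Y) = 3 exceeds m(X U T) = 2
assert boundary_condition(0, 4, 10)      # case 1 with T blown up: obeyed
assert boundary_condition(0, 3, 4)       # case 2 with T blown up: obeyed
```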
It remains to concede that we introduce by the boundary condition a
quantitative condition that seems to depart from the qualitative approach.
However, the fundamental difference between a refined qualitative and a refined
quantitative approach (Chapter 12) is that the latter presupposes a distance
function between structures. Below we will expose our reservations about such
distances. But first we want to elaborate the claim that the qualitative approach
should not be conceived as an approach that wants to exclude the use of
quantitative aspects of the relevant structures. Second, we would like to stress that
although it will not always be easy to operationalize the required content
measure, the prospects are not so bad as Mormann seems to suggest.
That we do not want to exclude the use of quantitative features is clear from
our favorite, indeed paradigmatic, type and examples of refined comparative
truthlikeness based on concretization, viz., the case where comparative
structurelikeness amounts to 'double concretization', e.g., the law of Van der Waals
(Section 10.4.), or to 'specialization, followed by concretization', e.g., the old
quantum theory (Section 11.1.). We consider such examples as test cases for a
refined definition. Although concretization almost by definition exploits
quantitative features of structures, it does not at all presuppose distances between
structures.
Regarding the operationalization of the content measure, it is clear that in
a purely set-theoretic context, with finite sets, it is plausible to take the number
of elements in the relevant sets. In the two-dimensional geometric context of
the two examples it was plausible to take the areas of the relevant sets. As
Mormann rightly remarks, the area suggestion for the two-dimensional case
does not work for rational numbers. However, we are not so sure as Mormann
that it is in many or most infinite cases impossible to define some reasonable
content measure of theories. On the contrary: many measure functions, in the
technical sense of measure theory, on the set of so-called measurable subsets
of the relevant set of structures seem to do the job. Hence, in general, the
prospects for such a measure are less problematic than Mormann suggests.
Let us therefore turn to the fundamental problem of a quantitative approach,
the use of a distance function between structures. Although Niiniluoto's
proposals for definitions of quantitative truthlikeness based on distances between
structures (to be presented in Chapter 12) are impressive, the problem is that,
apart from some exceptional cases, see below, there usually is nothing like a
natural real-valued distance function between the structures of scientific theories,
let alone something like a quantitative comparison of theories based on
such a distance function. And even in cases where it is technically easy to define
a distance function, as in some paradigm examples of the structuralist approach,
e.g., between the potential models of classical particle mechanics (CPM), as far
as they share the same domain and time interval, such a distance function
seems never to be used by scientists. The main reason obviously is that as soon
as there are two or more real-valued functions involved indicating quite different
quantities, and hence expressed in quite different units of measurement, any
'overall' distance function has equally many fundamentally arbitrary aspects.
For we have to add in one way or another functions with values in meters (m)
for position, kilograms (kg) for mass and newtons (kg.m/sec2) for force. One
should not be misled by the fact that scientists frequently compare models
quantitatively, even in terms of distances. However, what they do in such cases
is quantitatively comparing one (type of) function, hence one aspect of such
models, but that is not at stake. If we want to compare e.g., classical particle
mechanics with special or general relativity theory we have to take full potential
models into account.
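The unit-dependence of any 'overall' distance function can be made concrete with a small computational sketch. The numbers below are our own invented illustration, not drawn from the text: a naive Euclidean distance over (position, mass) pairs ranks one model closer to the target when mass is expressed in kilograms, and the other when the very same models are re-expressed in grams.

```python
import math

def overall_distance(a, b):
    """Naive 'overall' Euclidean distance over (position, mass) tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# (position in m, mass in kg) -- invented illustrative values
target = (0.0, 1.0)
model_a = (3.0, 1.0)    # position off by 3 m, mass exact
model_b = (0.0, 1.005)  # position exact, mass off by 5 g

d_a_kg = overall_distance(model_a, target)
d_b_kg = overall_distance(model_b, target)
# In kilograms, model_b looks much closer to the target than model_a.

# Re-express mass in grams: the same physical models, new numbers.
to_g = lambda m: (m[0], m[1] * 1000.0)
d_a_g = overall_distance(to_g(model_a), to_g(target))
d_b_g = overall_distance(to_g(model_b), to_g(target))
# In grams, the ordering flips: model_a now looks closer than model_b.

print(d_b_kg < d_a_kg, d_a_g < d_b_g)
```

The choice of units thus silently fixes the weights of the quantities being added, which is exactly the fundamental arbitrariness at issue.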
As we already hinted at, there are exceptions to the rule that distances
between structures are out of order, for instance, in case the theories to be
compared just amount to probability vectors. Roberto Festa (1993, 1995) has
convincingly shown that in such cases truth approximation can take a quantitative form, even such that systems of inductive probability may be more or less
optimal as strategies of truth approximation. However, it will also be argued
that this primarily concerns improper cases of nomic truth approximation, that
is, it amounts to refined actual truth approximation. Be this as it may,
Niiniluoto (1998, p. 12) is certainly right when he suggests that there may be
examples of intuitive truth approximation that require a quantitative approach.
He gives two graphical examples, of which the first is more convincing than
the second. A number version of the first example is the claim that
{(0, 1), (100, 1)} is closer to {(3, 1), (100, 0)} than {(0, 2), (100, 2)} is, which
is not captured by the refined definition, but it also does not allow the opposite
claim: to get it right, it may well be necessary to use a quantitative truthlikeness
measure based on real-valued distances between structures. A number version
of his second example is the claim that {101, 199} is closer to the truth
{100, 200} than {101, 150, 199} is, whereas the refined definition also applies in
the other direction, which is not excluded by the boundary condition (although
it is excluded by the strong version). This example is less convincing, since it
may well be excluded by some defensible strengthening of (Ri). However this
may be, occasional need of a quantitative approach is one thing, the feasibility
of a meaningful quantitative approach in all cases is another.
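Both number versions can indeed be captured by a quantitative measure. The following sketch uses a symmetrized average minimum distance between finite sets of number tuples; this is our own rough stand-in for min-sum style measures of the kind presented in Chapter 12, not Niiniluoto's exact definition.

```python
import math

def euclid(a, b):
    """Euclidean distance between two structures given as number tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def set_distance(A, T):
    """Symmetrized average minimum distance between finite sets of structures."""
    a_to_t = sum(min(euclid(a, t) for t in T) for a in A) / len(A)
    t_to_a = sum(min(euclid(t, a) for a in A) for t in T) / len(T)
    return (a_to_t + t_to_a) / 2

# First example: pairs of numbers
truth1 = {(3, 1), (100, 0)}
cand_a = {(0, 1), (100, 1)}
cand_b = {(0, 2), (100, 2)}
print(set_distance(cand_a, truth1) < set_distance(cand_b, truth1))  # cand_a closer

# Second example: single numbers as one-place tuples
truth2 = {(100,), (200,)}
cand_c = {(101,), (199,)}
cand_d = {(101,), (150,), (199,)}
print(set_distance(cand_c, truth2) < set_distance(cand_d, truth2))  # cand_c closer
```

On this measure the intuitive ordering comes out in both cases, illustrating what a quantitative approach can deliver where the directly comparative definitions stay silent.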
Accordingly, in general, we cannot count on conceptually plausible distance
functions for proper cases of nomic truth approximation. For this reason, an
incommensurability reason of sorts, we focus on qualitative or, more precisely,
directly comparative definitions of truthlikeness, which do not essentially depend
on distances between structures.9 Such definitions may be criticized for not
rightly classifying clear positive cases, leaving room for defensible liberalizations; as long as they do not wrongly classify clear negative cases as positive
cases, they are defensible as a cautious point of departure.
In this respect, Zwart (1998, Ch. 6) has proposed an interesting way to
combine, what he calls, the content and the likeness approach, for propositional
languages. Whereas our basic and refined definitions are both 'truth-value
dependent', that is, only a true (false) theory can be closer to the truth than
another true (false) theory, his definition of refined verisimilitude leaves room
for the possibility that a false theory is closer to the truth than a true one:
e.g., {p&q&¬r} (false) is closer to the truth {p&q&r} than {p∨q∨r} (true).
Unfortunately, his proposal is technically rather complex and it seems unlikely
that it can straightforwardly be generalized to, e.g., first-order languages.
However, using his main intuitions for a liberalization of our refined definition
seems feasible.
For the time being, the refined definition, together with the boundary condition, is useful as a sufficient condition. Recall that the boundary condition is
implied by the basic definition and that it excludes the foregoing counterintuitive cases, suggested by Mormann, of prima facie truth approximation by
strong weakening of the theory. After all, the decisive objection to the basic
definition was not a matter of allowing truth approximation by relatively weak
theories, but a matter of, starting from a false theory, excluding truth approximation by another false theory introducing 'less mistaken' models. By the
suggested combination of criteria, we bring two desiderata together, with the
consequence that the counterintuitive examples are excluded, whereas the 'T
blown up cases' are not, without having to exclude structurelikeness notions
defined with the aid of distance functions between structures, such as Sd and
S2. To be sure, the comparative approach becomes in this way far from merely
qualitative. What really matters is that the combination of criteria does still
not depend on distances between structures, for the required notion of structurelikeness may well be defined without using such (overall) distances. What surely
An important question is whether a plausible refined definition of 'more successful' can also be given, such that the corresponding rule of success is functional
for approaching the truth in the refined sense. The answer to this question is
positive; to show this, it is crucial to prove the Refined Success Theorem which
states that the adapted TA (truth approximation) hypothesis and the
CD (correct data) hypothesis can explain that one theory is more successful
than another in the refined sense.
Recall that R indicates the set of accepted instances at a certain time (not
made explicit) and S the strongest accepted law, and that the CD-hypothesis
guarantees that R is a subset of T and T of S. We will first give the refined
definition of "theory Y is, relative simply to R/S, at least as successful as theory
X", indicated by MSFr(X, Y, R/S), and then paraphrase the clauses:
(Ri)S
for all y in Y - (X U S)
there are x in X - S and z in S - X such that s(x, y, z)
(Rii)R
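The external clause (Ri)S can be checked mechanically for finite sets of structures. The sketch below is our own illustration: the structures are plain integers and s(x, y, z) is read as a hypothetical 'betweenness' structurelikeness relation, a stand-in for whatever notion of structurelikeness is in force.

```python
def satisfies_Ri_S(X, Y, S, s):
    """Check the external clause (Ri)S of 'Y is at least as successful as X':
    every extra model y in Y - (X | S) must improve upon some x in X - S
    with respect to some z in S - X, i.e., s(x, y, z) must hold."""
    return all(
        any(s(x, y, z) for x in X - S for z in S - X)
        for y in Y - (X | S)
    )

# Toy illustration with integers as structures and 'betweenness' as a
# hypothetical structurelikeness relation s:
s = lambda x, y, z: min(x, z) <= y <= max(x, z)
S = {0, 1, 2}   # models of the strongest accepted law
X = {5, 6}      # old theory
Y = {3, 6}      # new theory: its extra model 3 lies between X and S
print(satisfies_Ri_S(X, Y, S, s))
```

Replacing Y by, say, {9, 6} makes the clause fail, since the extra model 9 improves upon no model of X in the direction of S.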
Note first the strong analogy between this definition and the refined definition
of truthlikeness. T has been replaced by S and R, respectively, in a systematic
way. The second clause says that Y represents R at least as well as X. The first
clause states that
is again functional for approaching the truth in the sense that the chosen theory
may still be closer to the truth (which would explain its being at least as
successful) and that the rejected theory cannot be closer to the truth (for
otherwise it would not be less successful). The Refined Forward Theorem now
states that, if the vocabulary is observational and if the refined version of the
Comparative Success Hypothesis (CSHr) "Y remains more successful in the
refined sense than X" is true, then "Y is closer to To in the refined sense than
X". In other words, if Y is not closer to To than X, (further) testing of CSHr
will sooner or later lead to falsification of CSHr. In Subsection 10.3.2. it will
become clear that for this falsification it is, in contrast to the falsification of
the basic version of CSH, now not enough that Y acquires an extra counterexample or X an extra success, but the counterexample should be 'redundant'
and the success should be 'relevant', in the sense defined in that section.
The argumentation that RSr is functional for refined truth approximation
can now be extended on the basis of the Refined Forward Theorem along
similar lines as in the basic case. It is also easy to check that the adapted
versions of the heuristic principles discussed in the basic case (Subsection 7.3.4.),
viz., the Principles of Separate and Comparative Evaluation, the Principle of
Content and the Principle of Dialectics, are in their turn functional for the
Refined Rule of Success, in the sense that they stimulate new applications of
the latter.
10.3. FOUNDATIONS OF REFINED NOMIC TRUTH APPROXIMATION

Introduction
Recall that (Bi) could be interpreted as "all mistaken models of Y are (mistaken)
models of X" or as "all true consequences of X are (true) consequences of Y".
On the other hand, (Bii) could be interpreted as "all correct models of X are
(correct) models of Y" or as "all strongly false consequences of Y are (strongly
false) consequences of X". From these 2 x 2 alternatives it was possible to
construe two uniform foundations, a model and a consequence one, and formally two, but only one very plausible dual foundation of the basic definition.
It will turn out that in the refined case too it is possible to give a uniform
model foundation, but a consequence foundation does not seem possible.
However, a dual foundation is again possible, and it can now be extended to
the refined methodology, which makes it once more the most appealing one.
10.3.1. Intuitive foundations of refined truthlikeness
M.PIr
of course, be improved, it can only be retained. Hence, we then get back: all
correct models of X are (correct) models of Y.
(Ri) can, similarly, be seen as a concretization of the modelinterpretation
of the negative intuition. It is plausible to call a new mistaken model of Y
redundant if it does not improve upon a mistaken model of X with respect to
some still missing target structure, i.e., a member of T - X. In this way, (Ri)
explicates the intuition:
M.NIr
Hence, in sum, the refined definition has an intuitive model foundation, roughly:
improving relevant models (M.PIr) while avoiding redundant models (M.NIr).
As mentioned above, the first clause (Bi) of the basic definition could also
be interpreted as explicating the intuition "all true consequences of X are (true)
consequences of Y". It will be shown that, on the level of sets of structures,
(Ri) has a similar, conceptually attractive, interpretation. It is plausible to look
for a transition to the claim "all relevant consequences of X are improved by
Y" in the weak sense that they are retained if not strengthened by Y. Whatever
the meaning of 'relevant consequence', this will amount to the claim that "all
relevant consequences of X are consequences of Y". Hence, the remaining
question is: what are 'relevant consequences'?
Now (the set of models of) a true consequence of X, i.e., a set of which T
as well as X are subsets, may or may not exclude mistaken models of Y that
improve mistaken models of X. If it does, it cannot be a consequence of Y.
Hence it is plausible to take only true consequences of X into consideration
that do not exclude, hence include, such mistaken models excluded by X but
improving mistaken models of X.
To elucidate this idea, let us first define the set of all such structures, to be
called 'the bridge between X and T':
B(X, T) = {y not in X U T |
there are x in X - T and z in T - X such that s(x, y, z)}
It is easy to check that (Ri) is on the level of sets equivalent to: Y is a subset
of X U B(X, T) U T, and hence, on the level of sets of sets, to
(RiC)
C.PIr
For this is precisely what (RiC) states in a formal manner. Another way of
reading (RiC) is that Y's moving up from X to T without detour means that
it goes without losing relevant consequences of X by going over the bridge
B(X, T).
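For finite sets of structures, the bridge and the set-level version of (Ri) can be computed directly. The sketch below is our own illustration: integers serve as structures, the total set U plays the role of the set of conceptual possibilities Mp, and s is again a hypothetical 'betweenness' structurelikeness relation.

```python
def bridge(X, T, U, s):
    """The 'bridge' between X and T: structures outside X and T that improve
    some mistaken model of X (x in X - T) with respect to some missing
    target structure (z in T - X), i.e., s(x, y, z) holds.
    U is the total set of structures (the role of Mp)."""
    return {
        y for y in U - (X | T)
        if any(s(x, y, z) for x in X - T for z in T - X)
    }

def satisfies_Ri(X, Y, T, U, s):
    """(Ri) at the level of sets: Y is a subset of X U B(X, T) U T."""
    return Y <= X | bridge(X, T, U, s) | T

# Toy illustration with 'betweenness' of integers as the relation s:
s = lambda x, y, z: min(x, z) <= y <= max(x, z)
U = set(range(10))
X = {0, 1}
T = {5, 6}
print(sorted(bridge(X, T, U, s)))     # the intermediate structures
print(satisfies_Ri(X, {1, 3, 5}, T, U, s))
```

Here the bridge consists exactly of the structures lying strictly between X and T, and a theory like {1, 3, 5} satisfies (Ri) because it stays within X, the bridge, and T.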
Hence, from the consequence analysis of the second clause, we may conclude
that the definition of refined truthlikeness, based on (Rii) and (Ri), or, equivalently, (RiC), may also be paraphrased by "improving relevant models (M.PIr)
while saving relevant consequences (C.PIr)", that is, we have not only an
intuitively appealing refined version of the model foundation of truthlikeness,
but also of the dual foundation.
We did not find any, let alone plausible, let alone intuitively appealing,
phrasing of (Rii) in terms of strongly false consequences. Hence, a refined
version of the consequence foundation for truthlikeness does not even seem
to exist.
In sum, we dealt with the fact that the basic definition of 'more truthlike' is
naive because it only leaves room for the improvement of a false theory by
dropping mistaken models and adding correct ones, whereas in scientific practice, false theories are frequently improved by replacing mistaken models by
other mistaken ones, which are, however, less mistaken. It was shown that the
structuralist refined definition of 'more truthlike' can be based on an, intuitively
appealing, concretization of the model foundation: improving relevant models
while avoiding redundant models, as well as on a similarly appealing concretization of the dual foundation: improving relevant models while saving relevant
consequences. However, a refined consequence foundation could not be
delivered.
10.3.2. Intuitive foundations of refined methodology
Recall that we argued in Section 8.1. that the basic methodology of truth
approximation, viz., HD-evaluation combined with the basic rule of success
for theory selection, had an intuitive dual foundation in terms of successes and
counterexamples, which is directly suggested by the HD-method. That is, the
methodologically crucial notion 'more successful' is based on the intuition:
more successes and fewer counterexamples. Though a model foundation could
also be given, it was hardly plausible. On the other hand, a plausible consequence foundation could be given for HD-results and the basic rule of success.
From the refined perspective it is first of all important to note, contrary to
what one might expect, that the results of HD-evaluation, and hence their
foundation, remain the same. That is, refinement of methodology is not a
matter of refinement of HD-results, but a matter of refinement of the comparison, and hence of the rule of success.
Hence, let us review the three possible foundations of HD-results. According
to the intuitive dual foundation, separate HD-evaluation leads to counterexamples and successes. According to the model foundation of HD-results,
HD-evaluation should lead to, besides counterexamples, established rightly
missing models. Though the notion of an established rightly missing model is
available, it is not plausible, let alone intuitively appealing. Finally, according
to the consequence foundation of HD-results, HD-evaluation leads to, besides
successes, established strongly false consequences. Though not intuitively
appealing, the latter notion is conceptually plausible, because to be established
as a strongly false consequence 'only' requires that its complement set is a set
of established counterexamples.
For the foundations of refined methodology, recall the definition of 'being
at least as successful' in the refined sense:
(Ri)S
for all y in Y - (X U S)
there are x in X - S and z in S - X such that s(x, y, z)
(Rii)R
notion 'more successful', in terms of saving relevant successes and accommodating counterexamples. Finally, according to the resulting dual foundation of
'refined truth approximation', 'more truthlike' can be taken in the refined sense
of saving relevant consequences, while improving relevant models. In this
way we obtain a dual account of qualitative truth approximation by the
HD-method, which is very close to scientific common sense.
Foundations        Model        Dual         Consequence

More truthlike
  basic            intuitive    intuitive    plausible
  refined          intuitive    intuitive    impossible?

More successful
  basic            available    intuitive    plausible
  refined          available    intuitive    impossible?

HD-results         available    intuitive    plausible
10.4. APPLICATION: IDEALIZATION AND CONCRETIZATION

Introduction
Idealization and concretization is an important strategy for program development, and there is the related strategy of guidance by an interesting theorem
(see Kuipers, SiS). In this section we will study these strategies in some detail
from the point of view of truth approximation. From the general exposition of
refined truthlikeness it then trivially follows that concretization of theories can
be a truth approximation strategy. This will be illustrated by the transition of
the theory of ideal gases to that of Van der Waals. Then we will outline how
concretization is also an important strategy in the investigation of the domain
of validity of an interesting theorem and, in particular, whether it is true for
the actual or even the nomically possible worlds.
already been mentioned that CT(X, Y, Z) is antisymmetric (in the central sense)
as soon as the three sets are convex; hence CT** is an antisymmetric special
type of theorylikeness.
A direct consequence of the DC-Theorem is that, if theory Y is a concretization of theory X, if Y is convex and mediating, and if the true set of nomic
possibilities T is a concretization of Y, then Y is closer to the truth than X.
This may be called the Truth Approximation by Double Concretization
(TADC) Corollary, a major goal of this section, viz., to show that and in
what sense concretization may be a form of truth approximation. All conditions
for truth approximation can be checked, except, of course, the crucial heuristic
hypothesis that T is a concretization of Y.
Let us first link the TADC-corollary with the explication of dialectical
concepts presented in Section 8.3. Recall that basic truthlikeness MTL(X, Y, T)
could be considered as the (nomic epistemological) explication of the idea of
dialectical negation, and that several other dialectical notions could be explicated in a similar way. Hence, it is plausible to claim that MTL,(X, Y, T) can
be seen as the refined explication of dialectical negation, and that the other
explications can be refined analogously. However, in Section 8.3. we could not
yet explicate the idea of 'dialectical correspondence' (Nowakowa 1974, 1994).
Since dialectical correspondence has certainly to do with concretization it is
now plausible to explicate the idea that Y dialectically corresponds with X by
MTLct(X, Y, T). The TADC-corollary specifies the conditions under which a
concretization Y of X dialectically corresponds with X and hence is closer to
the truth than X.
Returning to the TADC-corollary in general, to obtain 'good reasons' to
assume that the required heuristic hypothesis that T is a concretization of Y
is true, it is important that the concretization has some type of (necessarily
insufficient) justification, of a theoretical or empirical nature, suggesting that
the account of the new factor is in the proper direction. In this respect, it is
plausible to speak of theoretical and/or empirical concretization. The famous
case of Van der Waals to be presented in the next subsection evidently is a
case of theoretical concretization, followed by empirical support. The same
holds true for Sommerfeld's concretization of the 'old quantum theory' to be
presented in the next chapter.
10.4.2. Application to gas models
The transition from the theory of ideal gases to Van der Waals's theory of
gases has frequently been presented as a paradigmatic case of concretization.
The challenge of any sophisticated theory of truthlikeness hence is to show
that this transition can be a case of truth approximation.
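Before turning to the structuralist formalization, the sense in which Van der Waals's law is a double concretization of the ideal gas law can be illustrated numerically in standard textbook form: setting both correction parameters a and b to zero recovers PV = nRT. The sketch below is our own illustration, with stock Van der Waals constants for CO2, and is not the PGM formalization developed in the text.

```python
# Ideal gas law: P V = n R T.
# Van der Waals:  (P + a n^2 / V^2)(V - n b) = n R T.
# Setting a = b = 0 recovers the ideal law, which is what makes the
# transition a (double) concretization.
R = 8.314  # molar gas constant, J/(mol K)

def pressure_ideal(n, V, T):
    return n * R * T / V

def pressure_vdw(n, V, T, a, b):
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

# Illustrative values for CO2 (a in Pa m^6/mol^2, b in m^3/mol)
a_co2, b_co2 = 0.364, 4.27e-5
p_ideal = pressure_ideal(1.0, 1e-3, 300.0)
p_vdw = pressure_vdw(1.0, 1e-3, 300.0, a_co2, b_co2)
p_zero = pressure_vdw(1.0, 1e-3, 300.0, 0.0, 0.0)
print(abs(p_ideal - p_zero) < 1e-9)  # a = b = 0 gives back the ideal prediction
```

The concretized law thus contains the idealized one as the limiting case a = b = 0, while giving corrected predictions for nonzero a and b.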
For this purpose, we start by formulating the relevant models in elementary
structuralist terms. (S, n, P, V, T) is a potential gas model (PGM) if and only if
S represents a set of thermal states of n moles of a gas and P, V and T are
The new result is that Y, which includes X, is also, like X, included in Val. Due
to concentricity of the basic and refined theorylikeness notions, it follows in
this case that MTL/MTLr(X, Y, Val). The ultimate purpose of this type of
research was to find out whether T, or at least R, are subsets of Val. Of course,
the larger Val has been proven to be, as in the described case, the greater the
chance, informally speaking, that R or even T are subsets of Val. However,
simply enlarging the proven domain of validity does not necessarily go in the
direction of R and T. For this purpose, concretization is the standard strategy.
Let it first have been shown that X is a subset of Val, and later that a
concretization Y of X (CON(X, Y); X need not be a subset of Y) is also a
subset of Val. It then trivially follows that MTLr(X, X U Y, Val). If, moreover,
Y is convex and mediating, it follows from the heuristic hypothesis that Y is a
concretization of T (CON(Y, T)), using the DC-Theorem, that MTLr(X, Y, T).
Hence, we have proved IT for a set Y which is more similar to T than X,
which increases the chance that IT holds for T, ipso facto for R.
A complex form of validity research concerns the case that IT is not fixed,
but that realistic factors are successively accounted for. Formally, this is also
a form of concretization. IT2 is called a concretization of IT1 if Val(IT2) =
Val2 is a concretization of Val(IT1) = Val1.
Now suppose that IT1 is proven for X. The relevant heuristic strategy is to
look for a concretization Y of X and a concretization IT2 of IT1 such that
IT2 can be proved for Y. The heuristic hypotheses are that T is a concretization
of Y and that there is a concretization IT* of IT2 such that IT* holds for T
and hence for R. This makes sense because, if Y and IT2 are convex and
mediating, it not only follows that Y is closer to T than X, but also that Val2
is closer to Val* than Val1. Hence, in this case we are not only on the way to
T but also to IT*.
The concretization of the theory and corresponding theorem of Modigliani
and Miller concerning the capital structure of firms by Kraus and Litzenberger
turns out to be a perfect example of this kind of approximation of a provable
interesting truth (Cools, Hamminga and Kuipers 1994). It will be presented in
the next chapter.
10.5. STRATIFIED REFINED NOMIC TRUTH APPROXIMATION

Introduction
We will now investigate how the refined definition works out for theories that
are stratified in terms of a distinction between theoretical and (relatively) non-theoretical or observational components. The main question is again whether
and to what extent truthlikeness on the theoretical level (including theoretical
and observational components) is preserved on the observational level.
Recall that we distinguished three vocabularies: Vo for the observation terms,
Vr for Vo plus the referring theoretical terms, and Vt for Vr plus all non-referring theoretical terms. By the Nomic Postulate these vocabularies were
supposed to generate subsets To, Tr and Tt of their corresponding sets of
conceptual possibilities, representing the corresponding sets of nomic
possibilities.
We concentrate on the question of whether truthlikeness is projected from
the t-level onto the o-level. Now st refers to structurelikeness on the t-level and
not to trivial structurelikeness. Let π indicate the projection function from
Mp(Vt) onto Mp(Vo). Recall also that and how the Truth Projection Postulate
(TPP(t, o)), telling that To = πTt, was made plausible. It was first of all supposed
to hold between the r-level and the o-level (TPP(r, o)). Between the t- and
r-level it was a direct consequence of the Fiction Reproduction Postulate
(FRP(t, r)). The combination implies the overarching TPP(t, o). It will be
assumed from now on. Recall, finally, that it is plausible to assume that
R ⊆ To ⊆ S, the CD-hypothesis, which will not be further mentioned.
In the refined set-up, it is now also plausible to assume throughout that s
satisfies the s-Projection Postulate: st(x, y, z) implies so(π(x), π(y), π(z)). Note,
contrary to what one might think at first sight, that the projection πX of a
convex set X need not be convex, nor the other way around.
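On a finite domain the s-Projection Postulate can be checked by brute force. The sketch below is our own hypothetical illustration: t-level structures are (observational, theoretical) pairs of numbers, the projection drops the theoretical component, and structurelikeness is componentwise 'betweenness'; none of this is the book's formal apparatus.

```python
def check_s_projection(structures, proj, s_t, s_o):
    """Brute-force check of the s-Projection Postulate on a finite domain:
    s_t(x, y, z) must imply s_o(proj(x), proj(y), proj(z))."""
    return all(
        s_o(proj(x), proj(y), proj(z))
        for x in structures for y in structures for z in structures
        if s_t(x, y, z)
    )

# Hypothetical illustration: pairs (observational, theoretical), with
# componentwise betweenness as t-level structurelikeness.
between = lambda x, y, z: min(x, z) <= y <= max(x, z)
s_t = lambda x, y, z: between(x[0], y[0], z[0]) and between(x[1], y[1], z[1])
s_o = between
proj = lambda x: x[0]  # projection: drop the theoretical component

domain = [(i, j) for i in range(3) for j in range(3)]
print(check_s_projection(domain, proj, s_t, s_o))
```

Here the postulate holds because betweenness in both components trivially entails betweenness in the observational component; a stricter o-level relation would make it fail.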
Let us start again with the external clause. Let X and Y be subsets of Mp(Vt)
and let them satisfy (Ri) with regard to Tt, i.e.,
(Ri)t
It is easy to check that this clause is projected onto the o-level, assuming TPP(t, o)
and assuming that X and To are convex.
there are also two interesting stronger sufficient extra conditions, i.e., conditions
which guarantee the projection of the internal clause. We will consider all three
conditions in the order of decreasing strength, and we will argue in passing
that the further relativization of falsification we noted in the stratified basic
case can be extrapolated to the stratified refined case.
Let X and Y satisfy (Rii) with regard to Tt:
(Rii)t
Note that (A1), which trivially implies that all observational structures are also
comparable, is a rather strong condition. But there may well be cases where it
is satisfied: for instance, the case of propositional structures based on fixed
finite sets of elementary propositions formulated in observational and theoretical terms.
Note that (A3) reduces to the basic version RC(X) when ro and rt are both
based on trivial structurelikeness. In Chapter 9 we argued that RC(X) was not
a very restrictive condition on X. However, it is much less clear how restrictive
RCr(X) in fact is. In the case of concretization, it amounts to the claim that X
has no 'accidental' observational idealizations of members of Y. That is, a
theoretical idealization corresponds to any observational idealization. As in
the basic case, this at least means that X is on the right track, but now it is
much more farreaching, since the relation of (observational) concretization
applies, of course, much more frequently than that of perfect observational fit.
It is interesting, also for the basic version, to consider the nature of (A3) in
some detail. Let RCr(X, Z) indicate that X, being of the t-level, is relatively
correct for Z with respect to the underlying o-level in the sense defined by
(A3). As we have seen in Subsection 10.1.1., it may well be that r is either an
equivalence relation or a partial ordering. It is not difficult to prove that
RCr(X, Z) is, in this case, also an equivalence relation or a partial ordering on
subsets of Mp(Vt), respectively. In the equivalence case, the theories considered
may all be relatively correct with respect to each other, in which case all or
none of them belong to the equivalence class associated with Tt. In the case
of a partial ordering, the theories considered may form an ordered sequence
which may or may not end with Tt, but many of such paths will end in Tt. It
is easy to check that the basic general version of (A3), based on trivial structurelikeness, RC(X, Z), is an equivalence relation.
Although (A3) reduces to RC(X) when s is trivial, the two stronger sufficient
conditions, (A1) and (A2), have no interesting analogue in the basic approach.
For, recall that trivial structurelikeness implies that two different structures are
always incomparable. As a consequence, (A1) is excluded as soon as Mp(Vt)
contains more than one element and (A2) reduces to the condition that π is a
one-one function, which is, of course, an improper extreme case of stratification.
If the internal clause is projected for To, it follows also directly that πY is
at least as successful with respect to R as πX. Hence, if Y is, with respect to
the internal clause, at least as close to Tt as X and if (A1) or (A2) or (A3)
hold, then πY is at least as successful with respect to R as πX.
In sum, the Projection Theorem can be refined: projection of refined truthlikeness on the level of theoretical terms is again guaranteed under some interesting
conditions, with the relevant success consequence as a corollary in combination
with the Refined Success Theorem.
It is evident that a similar story can be told about the relation between the
r- and the o-level, and also, but more trivially, about the relation between the
t- and the r-level. As in the stratified basic case, the sufficient condition for
projection, RCr(X), is trivial between the t- and the r-level, since Tt contains
all conceivable expansions of Tr.
Again we also have a refined version of the Upward Theorem, now stating
that if πY is closer to To in the refined sense than πX then Y* is closer to Tt
in the refined sense than X*, where
X* =def X - ({x in X - Y | π(x) in πY ∩ To}
      ∪ {x in X - Y | for all z in π⁻¹To, r(x, z) implies that there is y in X ∩ Y such that s(x, y, z)})
and Y* =def X ∩ Y. Hence, Y* is the same as in the basic case, whereas X* is
a further strengthening of its corresponding version in the basic case.
In view of the similarity between the basic and the refined versions of the
Success/Forward and Projection/Upward Theorems we can argue that
HD-evaluation and RSr are functional for observational, substantial and theoretical truth approximation in essentially the same way as in the basic case.
Since the refinement introduced in this chapter is basically a comparative
matter, the definitions of Tt-reference and of the (total) referential claim of a
theory remain unchanged, and hence the analysis of referential truth approximation is not affected. In sum, the results of refinement and epistemological
stratification can be integrated.
Concluding remarks
In this chapter we have presented a conceptually plausible definition of refined
truthlikeness of theories based on an underlying notion of structurelikeness.
Taking into account the assumed fixed character of the vocabulary (hence of
the set of conceptual possibilities), it allows minimally the conclusion that
conceptually relative but otherwise objective truth approximation is possible
in a sophisticated sense, for example by concretization. Moreover, it justifies
11
EXAMPLES OF POTENTIAL TRUTH APPROXIMATION
Introduction
In the previous chapter we showed that the refined definition has real-life
scientific examples, viz., by pointing out that the Law of Van der Waals provides
a perfect case of (refined) potential truth approximation, in particular by
(double) concretization. Recall that a sequence of three theories is called a case
of potential truth approximation when the second is closer to the third than
the first is to the third. Two other scientific examples of such sequences have
been studied as well. In (Hettema and Kuipers 1995) it is demonstrated that
Sommerfeld's reconstruction of the sequence consisting of the theories of
Rutherford, of Bohr and his own theory, i.e., the successive stages of what now
is called 'the old quantum theory', is also a case of potential truth approximation. In the previous chapter (Section 10.4.) we explained that the technical
definition of basic and refined truth approximation can also be directed at
socalled provable interesting truths. In (Cools, Hamminga and Kuipers 1994)
it is shown that the theory of the capital structure of firms of Kraus and
Litzenberger is closer to a provable interesting truth than the original theory
of Modigliani and Miller.
In this chapter both examples will be presented as far as the formal aspects
are concerned. It should be stressed that these examples could only be analyzed
with the help of experts in the field and that the following text is heavily based
on the formal sections of the two papers mentioned. We have only added some
remarks about reference.
In sum, we will show that some important examples of successive theory
transition can be reconstructed as cases of potential truth approximation. This
illustrates that the refined definition is apparently in accordance with the basic
heuristics of truth approximation: steps forward in the direction of the truth
are made by improving theories according to shared scientific standards.
11.1. THE OLD QUANTUM THEORY
Introduction
(the 'old' quantum theory of the atom), in the form as given in Sommerfeld's
Atombau und Spektrallinien. In particular, it will first be shown in structuralist
terms that Bohr's theory is a specialization of Rutherford's theory and that
Sommerfeld's relativistic theory in its turn is a concretization of Bohr's theory.
Then it will be shown, in general, that a specialization followed by a concretization may well be a case of refined truth approximation, which would explain
the increase of explanatory success of the successive theories.
Since its first appearance in 1919, Sommerfeld's Atombau und Spektrallinien
(Atombau for short) served as a comprehensive treatise on the structure of
atoms and spectral lines, and, as such, was sometimes called the 'Bible of
atomic theory'. It was even to survive the birth of quantum mechanics in 1926,
and the book, though supplied with a Wellenmechanischer Ergänzungsband,
served its purpose well into the thirties. It was revised many a time after its
original publication and first translated into English in 1923. The book was
originally written at a time when, according to Lakatos' reconstruction of the
'old' quantum theory as a research program (Lakatos 1970, 1978), the old
quantum theory was on the verge of entering its degenerative phase. In Lakatos'
reconstruction, Bohr's program had been progressive since its inception in
1913, started to degenerate around 1920, and was finally replaced by the new
quantum theory of Schrödinger and Heisenberg in 1926.²
This shows that, from the perspective of the historian of science, the Atombau occupies a peculiar position: it is not very often that a statement of theoretical principles, particularly one written in such a clear style as that of the Atombau, survives the replacement of those principles by many years.
Part of this may be due to the fact that, in some parts, Sommerfeld's book
reads like a history of science itself, and has thus remained of interest to
practicing scientists long after the rejection of its theoretical basis. But perhaps
more important in this respect was that, in some parts of atomic theory at
least, the explanations offered by the Atombau were immensely successful in
the explanation of empirical facts and remained so even after the construction
of quantum mechanics.
A very important aim in the development of theories of the atom after the
success of the Bohr (1913a/b) theory of the atom was the explanation of atomic
spectra. Balmer's law, predicting the lines of the hydrogen spectrum, had
already been explained by Bohr's theory. Starting from Bohr's postulates,
Sommerfeld's achievement was twofold. Firstly, he analyzed the theory of
spectra in the presence of perturbing fields in great detail, thus providing an
analysis of the Stark effect in the case of electric and the Zeeman effect in the
case of magnetic fields. Furthermore, he convincingly explained the observed fine structure in the Balmer lines by blending the quantum principles pioneered by Bohr with the theory of special relativity. Our discussion will focus on the
second aspect.
Within the context of truth approximation, a large amount of empirical
success is supposed to be indicative of 'being close to the truth'. In this section
we will show that the logical structure of the Atombau can well be described
in terms of refined truth approximation. We will start with a self-contained presentation of the necessary formalizations, followed by the truthlikeness
analysis and some conclusions. For a sketch of the historical and scientific
background the reader is referred to the original paper.
11.1.1. Structuralist reconstruction of the relevant theories
DEF. 1: x = ⟨T, r, φ, E⟩ is a potential one-electron atom (x ∈ Mp(A1)) iff
(1) T is a set of time points (an interval of real numbers);
(2) r is a function from T to ℝ+: the distance of the electron to the nucleus (center);
(3) φ is a function from T to [0, 2π]: the angular position of the electron;
(4) E is a function from T to ℝ: the total energy of the atom.
The theories to be defined all presuppose this frame, and they will use the notions of the charge of the nucleus ('atom without the electron'), the (rest) mass and charge of an electron, Planck's constant and the
velocity of light. In the study of the structure of the atom, all these terms were
already supposed to refer, and the (rest) mass and charge of an electron and a
nucleus, Planck's constant and the velocity of light, were even measured, hence
conceived as observable, at the time.
We will now present the models (M) of the four theories of the atom, each
time followed by one or two remarks. Each formalized theory allows one set of 'energy states', each set characterized by an appropriate set of quantum numbers. The sets differ for each theory we discuss. Atomic spectra are predicted
by taking the energy difference of two different states in a set and dividing this difference by Planck's constant. 'Explanation' obtains when the thus predicted expression for the frequencies entails the empirical law to be explained.
We start with Rutherford's theory of the (one-electron) atom: RA.

DEF. 2: x is a Rutherford atom (x ∈ M(RA)) iff
(1) x ∈ Mp(A1);
(2) (m: mass of the electron; Ze: charge of the nucleus)

E(t) = (m/2)·[ṙ² + r²φ̇²] − Ze²/r    (i)

Note that (i) specifies the total energy in terms of the kinetic energy, further subdivided into a 'radial' and an 'angular' component, and the potential energy, the latter as given by Coulomb's law.
The following theorem, essentially a variant of Kepler's law of the elliptic orbit of planets, was given by Sommerfeld for the electron in the atom.

Theorem 1: in a Rutherford atom the electron describes an ellipse (Kepler orbit).
We now jump to the crucial theory of Bohr (BA), introducing the quantization hypothesis. Since our present formalization most easily admits Sommerfeld's formula for the quantization hypothesis, we will use Sommerfeld's formulation, and derive Bohr's quantization hypothesis from it. Sommerfeld's

Quantization hypothesis (QH): there are integers n_r, n_φ such that (h is Planck's constant)

∮ mṙ dr = n_r·h    ∮ mr²φ̇ dφ = n_φ·h

which quantizes the distance to the nucleus and the angle, so both variables together characterize the elliptic orbit of the electron. We give the formalization of Sommerfeld's nonrelativistic version of Bohr's theory in the following:
DEF. 3: x is a Bohr atom (x ∈ M(BA)) iff
(1) x ∈ M(RA);
(2) QH.
Now it is possible to prove the following:

E = −(2π²mZ²e⁴/h²)·[1/(n_φ + n_r)²]    (ii)
As an aside, it should be noted that the Bohr theory of the atom under this
construction is not equivalent to the 1913 Bohr theory of the atom, where only
circular orbits are allowed. For the plausible reconstruction of the authentic
Bohr theory as a specialization of the one above, we refer to the original paper.
It is important to note that M(BA) is a proper subset of M(RA). Conceptually, this means that Bohr's theory is a so-called specialization of Rutherford's theory. More precisely, it is a theoretical core specialization in the sense that it concerns the models and hence the core of the theory, and the quantum numbers n_φ and n_r are extra theoretical terms.
Before we define Sommerfeld's relativistic theory of the oneelectron atom,
we first introduce the relativistic version of Rutherford's theory: rRA.
DEF. 4: x is a relativistic Rutherford atom (x ∈ M(rRA)) iff
(1) x ∈ Mp(A1);
(2) (c: velocity of light; m₀: rest mass of the electron)

E(t) = m₀c²·[1/√(1 − β²) − 1] − Ze²/r    (i)rel

with β = v/c, v the velocity of the electron.
Sommerfeld showed that, as for the orbits of planets, the relativistic orbit is only approximately an ellipse (more precisely, it is an ellipse with an advancing perihelion):

Theorem 3: in a relativistic Rutherford atom the electron describes a precessing ellipse (relativistic Kepler orbit); to be precise, there are a and ε such that

r = a·(1 − ε²)/(1 + ε·cos γφ)

with γ = √(1 − (Ze²/(c·p_φ))²),

where p_φ is the moment of momentum mr²φ̇, the areal constant. As c goes to infinity, it is easily seen that γ approaches 1 (see Sommerfeld 1919, p. 254).
Sommerfeld's final goal was the quantization of this relativistic Rutherford
atom, or in our terms, Sommerfeld's theory of the atom: SA.
DEF. 5: x is a Sommerfeld atom (x ∈ M(SA)) iff
(1) x ∈ M(rRA);
(2) QH.

Now it is possible to prove the following:

E = m₀c²·{[1 + α²Z²/(n_r + √(n_φ² − α²Z²))²]^(−1/2) − 1}    (ii)rel

with α = 2πe²/(hc), the fine structure constant.
The proofs of the reduction of, in particular, the two characteristic formulas
(i)rel and (ii)rel require sophisticated mathematical approximation. Be this as it
may, we may conclude that the relativistic models are concretizations of the
respective nonrelativistic models, in the technical sense that the former reduce
to the latter by a limit procedure. Applying the relevant technical definition of
concretization of theories (see Section 10.4.), we may conclude, in view of the fact that every nonrelativistic Rutherford/Bohr atom has a relativistic concretization, that the relativistic Rutherford theory rRA is a concretization of the (nonrelativistic) Rutherford theory RA, and also, specifically, that the
Sommerfeld theory SA is a concretization of the Bohr theory BA.
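The concretization claim can also be checked numerically: formula (ii)rel reduces to formula (ii) when the velocity of light is sent to infinity, while at the physical value of c it yields a small fine structure splitting. The sketch below is our own illustration, reusing CGS values for the constants.

```python
import math

m0 = 9.109e-28   # rest mass of the electron [g]
e  = 4.803e-10   # elementary charge [esu]
h  = 6.626e-27   # Planck's constant [erg s]
c  = 2.998e10    # velocity of light [cm/s]

def bohr_energy(n_r, n_phi, Z=1):
    """Nonrelativistic energy (ii): depends only on n = n_phi + n_r."""
    n = n_r + n_phi
    return -2 * math.pi**2 * m0 * Z**2 * e**4 / (h**2 * n**2)

def sommerfeld_energy(n_r, n_phi, Z=1, c=c):
    """Relativistic energy (ii)rel, with alpha = 2 pi e^2 / (h c)."""
    alpha = 2 * math.pi * e**2 / (h * c)
    inner = n_r + math.sqrt(n_phi**2 - alpha**2 * Z**2)
    return m0 * c**2 * ((1 + alpha**2 * Z**2 / inner**2) ** -0.5 - 1)

# Concretization: as c -> infinity, (ii)rel reduces to (ii)
for factor in (1, 10, 100):
    ratio = sommerfeld_energy(1, 1, c=c * factor) / bohr_energy(1, 1)
    print(factor, ratio)     # ratio tends to 1 as c grows

# At the physical value of c the two formulas differ in the fine structure:
# states with equal n_r + n_phi but different n_phi have slightly different energies.
split = sommerfeld_energy(0, 2) - sommerfeld_energy(1, 1)
assert split != 0.0
```

The limit behavior is exactly the technical sense in which SA is a concretization of BA: every nonrelativistic model is recovered from a relativistic one by the limit procedure c → ∞.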
Figure 11.1. depicts the formal situation of the classes of models of the
theories (indicated by the names of the theories).
[Figure 11.1. The classes of models of the theories RA, BA, rRA, and SA within the frame Mp(A1)]
In the next subsection we will prove that the sequence ⟨RA, BA, SA⟩, due to these formal relations, is a case of potential (refined) truth approximation.
11.1.2. Truth approximation analysis
In view of the fact that the present example is not a case of two successive
concretizations, the Double Concretization (DC)theorem presented in the
previous chapter (Section 10.4.) cannot be applied. We need some adapted version which is suitable for specialization followed by concretization. It turns
out to be possible to use much of the concretization apparatus as developed
in Section 10.4. In its terms we need some additional technical preparations.
Let us define, for a subset Y of X and assuming CON(X, Z), Z(Y) as the restriction of Z to concretizations of members of Y:

Z(Y) = {z ∈ Z | there is y ∈ Y such that con(y, z)}
Ad (Ri): trivial, for Y − (X ∪ Z(Y)) is empty, which follows directly from the fact that Y is a subset of X.
Ad (Rii): let z ∈ Z(Y), x ∈ X and r(x, z), that is, con(x, z), and, due to the resulting empty intersection of Z(Y) and X, even con⁺(x, z). From z ∈ Z(Y) it follows that there must be y ∈ Y such that con(y, z) and even con⁺(y, z), due to the resulting empty intersection of Z(Y) and Y. Hence, due to the unique determination of specific idealizations, x = y and hence x belongs to Y. Together with con(x, x) we now get ct(x, x, z) with x ∈ Y.
In words, the theorem says in fact that a specialization of a theory, followed
by a concretization that amounts to the relevant restriction, i.e., specialization,
of a nonoverlapping concretization of the original theory, is a case of potential
refined truth approximation. It also suggests the Truth Approximation by Specialization/Concretization (TASC) Corollary, which states that if theory Y is a specialization of theory X and if the true set of nomic possibilities T equals the relevant specialization of a nonoverlapping concretization of X, then Y is closer to the truth than X. The assumption that there is such a T, for the domain of one-electron atoms and the frame Mp(A1), is the Nomic Postulate. The crucial heuristic hypothesis for this truth approximation claim is that T equals the relevant specialization of a nonoverlapping concretization of X. Note that, unlike the case of double concretization, there are no further technical conditions on X or Y in the SC-Theorem and hence the TASC-Corollary.
It is easy to check that the four theories formalized in the previous section,
viz., RA, BA, rRA, SA, satisfy the conditions of the theorem, not only in the
informal sense, but also in the specified formal sense:
reported by Sommerfeld or others, which may illustrate the 'further relativization' of falsification.
As has been explained in Chapter 9, it is difficult to attach referential truth
approximation to empirical progress, not only since referential truth approximation does not guarantee empirical progress, but, more importantly, since
neither theoretical nor substantial truth approximation imply referential truth
approximation. Hence, although the increase of success was deductive evidence
for the relevant truth approximation hypotheses, it was, at most, nondeductive
evidence for the claim that the quantum numbers refer.
It is interesting to note that the fine structure can be obtained by concretization of the Balmer lines, and hence that the fine structure spectrum reduces to the Balmer spectrum when the velocity of light goes to infinity. Now it is also plausible to think that many (if not all) consequences of a concretization of a theory are concretizations of consequences of that theory. If it concerns (approximately) true observable consequences, then they are explanatory successes. To illustrate the general point, we have included Theorems 1 and 3 and the reduction claim of (i)rel to (i) in the formalization. In view of the fact that the orbit of an electron is unobservable, Theorem 1 (elliptic orbits for RA and hence BA atoms) and Theorem 3 (precessing ellipse for rRA and hence SA atoms) and the reduction claim that a precessing ellipse reduces to an ellipse are simply theoretical claims.
11.1.3. Conclusion
Lakatos' reconstruction of the constructive phase of the old theory of the atom
can be refined on many points, but his reconstruction of the 'degenerative'
phase is also open to criticism (Radder 1982). Unfortunately, Lakatos' reconstruction largely bypasses the role of Sommerfeld's Atombau, for it could have
profited from a closer consideration of it.
Lakatos' assessment of the situation around 1917 can be summarized in the contention that the old quantum theory had had its major portion of success. In Lakatos' words, Bohr then 'got worried' (Lakatos 1970, p. 150) about the fate of the 'old' quantum theory and, as a consequence, passed the initiative to Sommerfeld. This, in our opinion, is oversimplified. Though one can only surmise, the main reason for the further development of the theory by Sommerfeld may have been his somewhat earlier adoption of the new formulation of the quantum conditions, which greatly simplified theoretical progress. Secondly, Bohr, instead of 'getting worried' about the fate of his theory,
was led into a more rigorous formulation of his theory which he felt could
now no longer be postponed (Darrigol 1992, p. 100), since his theory had
gained a more widespread acceptance and its foundations should therefore be
clear. Our assessment of the situation from Bohr's viewpoint is that (i) from
1913 on Bohr saw his theory as a step towards a more final theory of the atom
but in no way as 'the' theory of the atom; and (ii) anticipating this, he tried to
determine which parts of his theory could be taken over into a more 'final'
theory of the atom and which could not.
Sommerfeld's 1919 Atombau, by its clear exposition of the theoretical principles of the 'old' quantum theory and its detailed consideration of the consequences of these principles, fitted more or less precisely into this intellectual niche. Therefore, Sommerfeld's work is strongly complementary to Bohr's in
this phase of the development of the quantum theory. The Atombau, by its
clear exposition of the then current status of this rapidly developing field, was
accessible to both theorists and experimenters alike. In the subsequent editions,
the book kept up surprisingly well with the latest experimental and theoretical
developments. As such, it provided an indispensable basis for the training of
the younger generation of theorists who developed and applied quantum
mechanics after 1926, starting from the achievements of both Bohr and Sommerfeld.⁴
11.2. CAPITAL STRUCTURE THEORY
Introduction
This section displays the logical skeleton of "A state-preference model of optimal financial leverage" by Kraus and Litzenberger (1973). As usual in theoretical
economics, the strategy of model construction is focused on an interesting
theorem (Hamminga 1983). Here, what is to be derived is a theorem on how
the value of (debt plus equity of) a firm depends on the proportion of debt in
that value. Modigliani and Miller (1958/1963) used a plausible model to arrive at implausible, and therefore intriguing, theorems: the proportion of debt has no influence at all on the value of the firm if corporate tax is nonexistent, and should be 100% if corporate tax exists. The model of Kraus and Litzenberger,
based on a 'state preference' approach, could explain the existence of an optimal
solution between 0% and 100%.
The 'state preference' approach consists of assuming a set J of all conceptually possible end-of-period 'states of the world'. In every single possible state of the world j there is a fixed and given return X_fj for every firm f, a given tax rate T_j and a given bankruptcy cost C_fj (C_fj = 0 for firms f that survive in state j). It is a one-period analysis. The return (earnings before interest and taxes) X_fj of firm f in state j is identical to the end-of-period market value of firm f. If you knew what state j would be realized, you would have no problems of uncertainty. Hence, the problem of uncertainty consists solely of investors not knowing what state will be realized, and having different probability beliefs concerning j ∈ J.
A 'market expectation' about the states j ∈ J is negotiated among investors by assuming tradeable 'primitive securities' Π_j, which can be thought of as 'lottery tickets' yielding $1 if state j occurs and nothing if some j' ≠ j occurs. The equilibrium price P_j of Π_j could be derived from (1) the probability beliefs of investors concerning state j, (2) their time preference for holding money, (3) their risk aversion and (4) their utility functions. Kraus and Litzenberger do not perform this derivation, but take P_j, j ∈ J, as given, exogenous variables. Any other security bought by an investor can now be identified with a definite number of lottery tickets Π_j for every state j (since investors are assumed to know the return of every security in every state).
Now we are ready to specify what structure a thing x should have in order
to be a conceptual possibility in the language that Kraus and Litzenberger
have chosen for their theory.
DEF. 1: x is a potential debt equity market system with corporate tax and bankruptcy cost (x ∈ DETCp) iff
(1) The symbols J, F, Ψ denote domain sets with the following meanings:
J: the set of states of the world
F: the set of firms in the market system
Ψ: the set of primitive securities
(2) The symbols Π, D, X, T, C, P, Y, Z, B, S, V denote functions, of which the meaning follows after the second ":":
Π: J → Ψ: the primitive security Π_j that has a return of $1 if state j occurs, and $0 in any state j' ≠ j.
D: F → ℝ₊: the debt of firm f, a promise to pay a fixed amount D_f, irrespective of the state that occurs, a nonnegative real number.
X: F × J → ℝ: the return of f in state j, the end-of-period value of f, a real number X_fj, negative in case of bankruptcy.
T: J → [0, 1]: the tax rate over X in state j, a real number T_j in the closed interval [0, 1].
C: F × J → ℝ₊: the bankruptcy cost C_fj of f in state j, zero if the firm survives in state j.
(3) With the help of the primary functions Π, D, X, T, C the secondary functions P, Y, Z, B, S, V are defined:
P: Ψ → ℝ₊ yields the price P_j of primitive security Π_j.
Y: F × J → ℝ₊ is the return to holders of debt D_f of every firm f in state j, where
Y_fj := D_f for D_f ≤ X_fj (the firm 'survived')
Y_fj := X_fj − C_fj for D_f > X_fj
Z: F × J → ℝ₊ is the return to holders of equity of firm f in state j, where
Z_fj := X_fj·(1 − T_j) + T_j·D_f − D_f for D_f ≤ X_fj
Z_fj := 0 for D_f > X_fj
DEF. 2: x is a debt equity market system with corporate tax and bankruptcy cost (x ∈ DETC) iff
(1) x is a DETCp;
(2) P is such that the primitive securities in Ψ (Π_j, j ∈ J) have market equilibrium prices P_j (j ∈ J)⁵;
(3) B is such that for all f:

B(D_f) = Σ_{j=1}^{n} Y_fj·P_j

that is,

B(D_f) = D_f·Σ_{j=1}^{n} P_j    (survival in all states)
B(D_f) = Σ_{j=1}^{k−1} (X_fj − C_fj)·P_j + D_f·Σ_{j=k}^{n} P_j    (bankruptcy in states 1, ..., k−1)
B(D_f) = Σ_{j=1}^{n} (X_fj − C_fj)·P_j    (bankruptcy in all states)
The first of these three functions is meant for the case in which debt D f is such
that the firm survives (will not go bankrupt) in every possible state. The second
function is meant for the cases where debt is such that the firm is bankrupt in states j = 1, ..., k − 1 and survives in the remaining states. The third function is
meant for the case in which debt is such that in all states the firm will be
bankrupt.
The present market value of debt is thus reduced to the present market value
of primitive securities. The same is done for the equity of f:
(4) S is such that for all f:

S(D_f) = Σ_{j=1}^{n} Z_fj·P_j

(5) V is such that for all f: V(D_f) = B(D_f) + S(D_f), the total market value of the firm.
Figure 11.2. depicts an example of a shape that V(D_f) could take for some firm f. The slopes of the line segments are determined by T and P_j (Kraus and Litzenberger 1973, p. 916), and every time D_f passes the value at which it equals some state's total return of the firm, X_fj, the bankruptcy costs of that state kick down the value of the function V(D_f).

[Figure 11.2. An example of the shape of V(D_f): a piecewise linear curve starting at V(0), with kinks at D_f = X_f1, X_f2, ..., X_fn and an interior maximum]
TDETC implies (1) that there can be a unique optimal amount of debt D_f, and (2) that this optimal amount of debt can be 'interior', that is, larger than zero and smaller than the total return of the firm in the firm's most lucrative state (X_fn). Figure 11.2. is drawn as an example of this possibility. This possibility arises as a result of the introduction of bankruptcy cost in the model. If bankruptcy costs are removed from the structures in DETC and DETCp, a poorer set of structures, DET(p), results⁶, where x ∈ DET(p) is called a (potential) debt equity market system with corporate tax. This brings us back to the original paradoxical Modigliani-Miller theorem on the optimal amount of corporate debt in the case of the existence of corporate tax; setting C_fj = 0 in TDETC immediately yields TDET:
Theorem TDET: in all structures x ∈ DET, for all firms f, if T_j is identical (= T) in all states j, then V(D_f) = V(0) + T·B(D_f)
Figure 11.3. depicts the typical shape of V(D_f). TDET implies, like TDETC, the possible existence of a unique optimal amount of debt D_f, but if it exists, it is a 'corner solution': the more debt, the higher the value of the firm, and therefore 100% debt financing is optimal.
[Figure 11.3. The typical shape of V(D_f) according to TDET: a line rising from V(0)]

If, finally, tax T_j is removed from the structures⁷, we obtain DE(p), where
x ∈ DE(p) is called a (potential) debt equity market system. This yields the initial Modigliani-Miller theorem:

Theorem TDE: in all structures x ∈ DE, for all firms f, V(D_f) = V(0)
This means that the debt-equity ratio is irrelevant to the value of a firm. The graph becomes a horizontal line.
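The three theorems can be illustrated with a toy computation. The sketch below is our own; the state prices and firm returns are invented for the purpose, and the functions simply follow the definitions of Y_fj, Z_fj and the sums defining B and S given above.

```python
# Toy state-preference market (all numbers invented for illustration)
P = [0.2, 0.25, 0.25, 0.2]       # equilibrium prices P_j of the primitive securities
X = [40.0, 80.0, 120.0, 200.0]   # returns X_fj of one firm f, ordered per state
T = 0.4                          # uniform corporate tax rate T_j = T
C = 40.0                         # bankruptcy cost C_fj per bankrupt state

def B_S(D, tax=T, cost=C):
    """Market values of debt B(D_f) and equity S(D_f), as sums over security prices."""
    B = S = 0.0
    for p, x in zip(P, X):
        if D <= x:               # survival: Y_fj = D, Z_fj = x(1 - T) + T*D - D
            B += p * D
            S += p * (x * (1 - tax) + tax * D - D)
        else:                    # bankruptcy: Y_fj = x - C, Z_fj = 0
            B += p * (x - cost)
    return B, S

def V(D, tax=T, cost=C):
    b, s = B_S(D, tax, cost)
    return b + s

# TDE: without tax and bankruptcy costs, the value is independent of debt (flat graph)
assert all(abs(V(D, tax=0.0, cost=0.0) - V(0, tax=0.0, cost=0.0)) < 1e-9
           for D in range(0, 201, 10))

# TDET: uniform tax, no bankruptcy costs: V(D) = V(0) + T*B(D), a corner solution
for D in (0, 50, 150, 200):
    b, _ = B_S(D, cost=0.0)
    assert abs(V(D, cost=0.0) - (V(0, cost=0.0) + T * b)) < 1e-9

# TDETC: with bankruptcy costs an interior optimum can appear
vals = {D: V(D) for D in range(0, 201, 10)}
best = max(vals, key=vals.get)
assert 0 < best < max(X)         # optimal debt strictly between 0% and 100%
print(best, round(vals[best], 2))
```

With these particular numbers the optimum lands at an interior debt level, reproducing in miniature the qualitative difference between the Modigliani-Miller results and the Kraus-Litzenberger model.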
History would have been nice and simple if Modigliani and Miller had first
discovered TDE in the Kraus and Litzenberger framework presented above,
then had introduced tax to arrive at TDET, and if, finally, Kraus and Litzenberger had enriched the theory with bankruptcy cost to arrive at TDETC.
However, Modigliani and Miller did not use the state preference framework.
For what this example is meant to illustrate below, we do not need to go into
the logical intricacies of reducing Modigliani and Miller's method of introducing market forces to the method used by Kraus and Litzenberger.
11.2.2. Truth approximation analysis
Recall from Subsection 11.2.1. that DE, DET, and DETC are subsets of the set DETCp of potential debt equity market systems with corporate tax and bankruptcy costs satisfying the extra conditions (2), (3) and (4) of DEF. 2. Moreover, in the case of DE-models, the bankruptcy cost function C and the corporate tax function T are uniformly zero, whereas in the case of DET-models, only C is uniformly zero, and in the case of DETC-models, neither T nor C are uniformly zero. Note that as a trivial consequence DE, DET and DETC are mutually nonoverlapping.
Let z be a DETC-model. It is not difficult to check that there is a (unique) DET-model y and a (unique) DE-model x such that con(x, y) and con(y, z), and hence ct(x, y, z). From this it is a small step to prove directly MTLct(DE, DET, DETC), i.e., that DET is closer to DETC than DE on the basis of concretization, as represented in Figure 11.4. This result is also easy to obtain
[Figure 11.4. DET is closer to DETC than DE: the classes of models DE, DET and DETC within DETCp]
via the DC-theorem from the fact that DET is convex and mediating and the combination of CON(DE, DET) and CON(DET, DETC).
Concretization of interesting theorems
Why does concretization make sense? What further goals were Kraus and Litzenberger aiming at? For the third, and final, round let us start from the basic result of Modigliani and Miller to the effect that DE is a subset of VCTDE. We will omit further reference to the intermediate result concerning (T)DET and use as many verbal formulations as possible.
The more or less explicit heuristic strategy of Kraus and Litzenberger which brought them to (T)DETC can be characterized as follows. Look for DE* and CTDE* such that DE* is a convex and mediating concretization of DE, CTDE* is a convex and mediating (interesting) concretization of CTDE and DE* is a subset of VCTDE*. This strategy is depicted in Figure 11.6.
[Figure 11.6. Heuristic strategy: look for DE* and VCTDE* such that DE* ∩ VCTDE* is closer to EP ∩ VCTEP than DE ∩ VCTDE]
The motivation for this strategy can be given in terms of an unknown set of
(potential) economically possible systems EP (EPp), including, of course, the
actual ones EA. To be precise, the following heuristic hypotheses form 'the
good reasons' for this strategy.
12
QUANTITATIVE TRUTHLIKENESS AND TRUTH APPROXIMATION
Introduction
In Subsection 10.2.2. we elaborated our scepticism about the idea of quantitative truth approximation, mainly because it presupposes real-valued distances between structures. Since such distances are not presupposed by quantitative confirmation, we are even more skeptical about quantitative approaches to truth approximation than about quantitative approaches to confirmation. However, we will nevertheless present some main lines of such approaches. We will first deal, in Section 12.1.,
with quantitative actual truthlikeness and truth approximation, which is relatively unproblematic when there is a plausible distance function on Mp. For
the basic nomic case, to be dealt with in Subsection 12.2.1. and part of
Subsection 12.3.1., the situation is also relatively clear, but the refined nomic case confronts us with essentially two types of problems. First, for the idea of
quantitative truthlikeness, discussed in Subsection 12.2.2., we have to find a
plausible distance function between two theories, and hence between two
subsets of Mp, based on a distance function on Mp. Even guided by some
plausible principles, our best refined proposal is not completely satisfactory.
Second, we have to choose between two essentially different ways of measuring
the quantitative success of theories: a nonprobabilistic and a probabilistic way.
In the first case, presented in Subsection 12.3.1., the success, or better the
failure, of a theory in meeting the available evidence is quantitatively expressed
in a similar way as the chosen distance function between two theories. The
second case, presented in Subsection 12.3.2., arises from an adaptation of
Niiniluoto's plausible idea of estimating the distance of a theory from the
complete, hence actual, truth on the basis of the available evidence. It will be
shown that there is, in the case that probabilities as well as distances are
plausible, a very sophisticated way of doing so, where ideas of Carnap, Hintikka
and Festa are combined, leading to 'double' truth approximation. However, it
will also be argued that this primarily concerns improper cases of nomic truth
approximation, that is, it amounts to refined actual truth approximation in
two respects.
The announced results for quantitative nomic truthlikeness and truth approximation are restricted to nonstratified theories based on a finite Mp. Since
they are not yet impressive, we refrain from including considerations dealing with stratification, and we will only make a few remarks about the extension to
an infinite Mp. We will conclude with some remarks about the intuitive foundation of the quantitative proposals.
In this chapter we will frequently assume or define a distance function or metric d on Mp(V) = Mp or its powerset P(Mp). For a survey of definitions and examples of distance functions, see (Niiniluoto 1987a, Ch. 1). We will use the following general definitions: d: X × X → ℝ is a proper distance function (df) on the arbitrary set X, and ⟨X, d⟩ a metric space, iff
(D1) d(x, y) ≥ 0
(D2) d(x, x) = 0
(D3) d(x, y) = 0 only if x = y
(D4) d(x, y) = d(y, x)    (symmetry)
(D5) d(x, z) ≤ d(x, y) + d(y, z)    (triangle inequality)
The following table indicates which of these conditions the various kinds of distance functions satisfy:

          D1    D2    D3    D4    D5
Proper    +     +     +     +     +
Pseudo    +     +           +     +
Semi      +     +     +     +
Quasi     +     +     +           +
301
If d′ is an order-preserving transformation of d, in the sense that d(x, y) ≤ d(u, v) iff d′(x, y) ≤ d′(u, v), then d′ and d agree on all comparative judgments of the form "s_d(x, y, z) iff s_d′(x, y, z)".
12.1. QUANTITATIVE ACTUAL TRUTHLIKENESS AND TRUTH APPROXIMATION
Defining and assessing quantitative actual truthlikeness and truth approximation requires definitions of distance functions and success measures which are specific for the relevant kind of structures. Recall that we argued that there could not be a general definition of more actual truthlikeness, for the general notion of more structurelikeness s(x, y, z) cannot be defined in a general way, but will have to depend on the type of structures considered. The same holds for a quantitative notion of structurelikeness or distance between two structures, and ipso facto for quantitative actual truthlikeness. There is one exception: the trivial distance function is of course generally defined by d_t(x, y) = 0 if x = y and d_t(x, y) = 1 if x ≠ y, which is a proper df.
For nontrivial distance functions, we briefly reconsider the different types of structures dealt with in Chapter 7 and will give some hints for definitions. We start with propositional structures, based on a finite set EP of elementary propositions. Recall that we represented a propositional constituent by the subset of the set of elementary propositions containing precisely its unnegated conjuncts. In this representation, (description) y was said to be at least as similar to the actual truth t as x if t − y is a subset of t − x and y − t a subset of x − t, hence yΔt is a subset of xΔt.
There is a plausible quantitative variant: the distance d_ps(x, t) between constituents x and t is the proper df |t − x| + |x − t| = |tΔx|. This definition has already been proposed by Tichý (1974). It is an example of the so-called Hamming distance or Clifford measure (Niiniluoto 1987a). It is easy to check that this definition satisfies the connection principle, i.e., "y is at least as similar to t as x" in the comparative sense implies that d_ps(y, t) ≤ d_ps(x, t).
It is easy to extrapolate this distance to an appropriate quantitative measure of success. Recall that the data p and n indicate the (mutually exclusive) sets of elementary propositions of which it has been established at a certain time that their truth-value is 'true' (positive) or 'false' (negative), respectively. Recall also that y is at least as successful at a certain time as x if and only if n ∩ y is a subset of n ∩ x and p − y is a subset of p − x. The quantitative variant for x is the number of established mismatches of x: |p − x| + |n ∩ x|, which never exceeds |p| + |n|, and increases from 0 to d_ps(x, t) when the number of elementary propositions for which the truth-value has been established increases. Alternatively, we may define the success as the number of matches: |p ∩ x| + |n − x|, which also never exceeds |p| + |n| and increases from 0 to |EP| − d_ps(x, t). Note that this definition satisfies the adapted connection principle. However, whereas the qualitative notion satisfies the success principle, i.e., 'at least as similar' implies 'at least as successful', this is no longer the case for the quantitative definition. But when we assume that the elementary propositions with which the data deal have been randomly selected, it is not difficult to prove that 'at least as similar' implies that the expected size of the success is at least as large. We call this the expected success principle.
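A small sketch (our own, with an invented example) of the propositional distance d_ps and the mismatch measure, using Python sets for constituents:

```python
def d_ps(x, t):
    """Hamming/Clifford distance between constituents: |t - x| + |x - t| = |t delta x|."""
    return len(t - x) + len(x - t)

def mismatches(x, p, n):
    """Established mismatches of x given data p (true) and n (false): |p - x| + |n & x|."""
    return len(p - x) + len(n & x)

EP = set('abcde')                 # elementary propositions
t  = {'a', 'b', 'c'}              # the actual truth (constituent)
x  = {'a', 'd', 'e'}              # a description
y  = {'a', 'b', 'e'}              # y is at least as similar to t as x

assert (y ^ t) <= (x ^ t)         # comparative claim: y delta t subset of x delta t
assert d_ps(y, t) <= d_ps(x, t)   # hence the connection principle holds
print(d_ps(x, t), d_ps(y, t))     # 4 2

# Partial data: 'a' and 'b' established true, 'd' established false
p, n = {'a', 'b'}, {'d'}
assert mismatches(y, p, n) <= d_ps(y, t)   # grows from 0 toward d_ps as data grow
```

The last assertion illustrates that the mismatch count is bounded by the full distance from the truth, which it approaches as more truth-values are established.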
For first order structures, it is not always clear how to define a distance
function, let alone a quantitative measure of success. However, in certain cases,
there are plausible definitions. For instance, in the case of a language with one
monadic predicate and a fixed domain a symmetric difference definition suggests
itself again. For definitions of distances between first-order structures, based on their first-order theories, see (Niiniluoto 1987a, Section 10.3.).
On the other hand, in the case of real-valued structures there are usually one or more well-known and plausible distance (from the truth) functions, with related quantitative definitions of the success of a structure description. We mention just one example. Let the structures ⟨D, f⟩ concern a real-valued function f on a fixed domain D, and let f_t indicate the true function. Then one plausible definition for d(⟨D, f⟩, ⟨D, f_t⟩) is the square root of the sum of the squares of the differences. A similar definition for the success takes the sum restricted to the subdomain of individuals for which the true value has been established.
Thus far, we have only mentioned 'absolute' distance and success functions.
Usually it is easy to see how the functions can be 'normalized', i.e., by what
fixed number the values have to be divided to obtain 1 as the maximum
distance. In numeric cases, this number is the total number of possibilities; in
other cases other 'normalization factors' are plausible.
Introduction
D_b(X, T) = D_b^i(X, T) + D_b^e(X, T) = γ·|T − X| + γ′·|X − T|
where y and y' in [0, 1] are weights for the two different kinds of mistakes. Of
303
course, y' may be equal to (1  y), or, alternatively, y' = y = 1 may hold. In the
last case, the definition is technically analogous to the definition of the quantitative distance between propositional constituents given above, and hence also
an example of a Hamming or Clifford distance.
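The basic definition just given can be sketched as follows (a minimal illustration of ours; the sets and weights are arbitrary):

```python
def D_b(X, T, gamma=1.0, gamma_prime=1.0):
    """Basic distance Db(X, T) = gamma*|T - X| + gamma_prime*|X - T|:
    weighted counts of the two kinds of mistakes, i.e., members of the
    truth T missed by the theory X, and members of X outside T."""
    return gamma * len(T - X) + gamma_prime * len(X - T)

T = {1, 2, 3, 4}            # the (nomic) truth
X = {3, 4, 5}               # a theory
print(D_b(X, T))            # gamma' = gamma = 1: |T - X| + |X - T| = 2 + 1 = 3,
                            # the size of the symmetric difference (Hamming case)
print(D_b(X, T, 0.5, 0.5))  # equal weights 0.5: 1.5
```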
It is easy to check that Db is a proper df and that it satisfies the connection
principle: MTL(X, Y, T) guarantees Db(Y, T) ≤ Db(X, T), which also holds for
the corresponding internal and external clauses separately.
Since Db leaves room for different weights of the two types of mistakes,2 it
is plausible to be more careful in the definition of 'more truthlike'. Let us say
that Y is at least as close to T as X when Y is internally as well as externally
at least as close to T as X.
It is important to note that the condition that Y is internally/externally at
least as close to T as X guarantees that Y is not stronger/weaker than
X ∩ T / X ∪ T, in the (combined) sense that |X ∩ T| ≤ |Y| ≤ |X ∪ T|. We will
say that Db satisfies the boundary principle. The boundary condition, i.e.,
|X ∩ T| ≤ |Y| ≤ |X ∪ T|, is a direct quantitative generalization of comparative
truthlikeness: X ∩ T ⊆ Y ⊆ X ∪ T. When X and T do not overlap, as in many
cases of concretization, then the boundary condition reduces to
0 ≤ |Y| ≤ |X| + |T|. The latter condition will, in general, be called the weak
boundary condition, giving rise to the weak boundary principle.
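A minimal sketch (ours) of checking the boundary condition and its weak variant for candidate theories Y:

```python
def boundary_ok(X, Y, T):
    """Boundary condition |X & T| <= |Y| <= |X | T|."""
    return len(X & T) <= len(Y) <= len(X | T)

def weak_boundary_ok(X, Y, T):
    """Weak boundary condition 0 <= |Y| <= |X| + |T|."""
    return 0 <= len(Y) <= len(X) + len(T)

X = {1, 2, 3}
T = {3, 4}
print(boundary_ok(X, {2, 3, 4}, T))             # |X&T|=1 <= 3 <= |X|T|=4: True
print(boundary_ok(X, set(), T))                 # |Y|=0 < 1: False
print(weak_boundary_ok(X, {2, 3, 4, 5, 6}, T))  # 5 <= |X|+|T| = 5: True
```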
Note that the result of replacing in Db(X, T) γ and γ' by γ/(γ + γ') and
γ'/(γ + γ'), respectively, leads to an order preserving distance function (see the
Introduction). Hence, in this and in all cases below where two separate weights
appear, the function can be transformed into an order preserving function by
replacing γ' by (1 − γ).
There is also a plausible 'averaged' version of Db:

Db^a(X, T) = Db^ai(X, T) + Db^ae(X, T) = γ|T − X|/|T| + γ'|X − T|/|X|
For structures with a measure m, the analogue of the basic definition replaces set sizes by measures:

Dbm(X, T) = γ m(T − X) + γ' m(X − T)
Since there may be nonempty sets with measure 0, Dbm may well be a pseudo-df.
It is easy to check that Dbm satisfies the connection principle and the relevant
version of the boundary principle, i.e., with m(X ∩ T) ≤ m(Y) ≤ m(X ∪ T) as
boundary condition.
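The role of measure-0 sets can be illustrated as follows (our toy measure, given as a weight per possibility):

```python
# 'c' is a possibility of measure 0; distinct theories can then be at
# distance 0 from each other, which is why D_bm may be only a pseudo-df.
m_weights = {"a": 0.5, "b": 0.5, "c": 0.0}

def m(A):
    """Finite measure of a set of possibilities."""
    return sum(m_weights[x] for x in A)

def D_bm(X, T, gamma=1.0, gamma_prime=1.0):
    """Measure-based distance: gamma*m(T - X) + gamma_prime*m(X - T)."""
    return gamma * m(T - X) + gamma_prime * m(X - T)

T = {"a", "b"}
X = {"a", "b", "c"}       # differs from T only by a measure-0 element
print(D_bm(X, T))         # 0.0, although X != T
print(D_bm({"a"}, T))     # m({'b'}) = 0.5
```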
There is again a plausible way of 'averaging' Dbm(X, T), viz.,

Dbm^a(X, T) = γ m(T − X)/m(T) + γ' m(X − T)/m(X)
The internal distance will always have to do with T, but the external distance
may or may not be directly related to Mp − T. Again, we only say that Y is
quantitatively at least as close to T as X when Y is internally as well as
externally at least as close to T as X.
What kind of df we want is one thing; that there will be other desiderata is
another. The following additional desiderata seem plausible, at least at first
sight:
connection principle: MTL_sd(X, Y, T) ⇒ Dd(Y, T) ≤ Dd(X, T)
boundary principle: Dd(Y, T) ≤ Dd(X, T) ⇒ |X ∩ T| ≤ |Y| ≤ |X ∪ T|
reduction principle: if d is the trivial distance function, then Dd(X, T) = Db(X, T)
We have already seen that Db satisfies the first two principles, whereas the third
is, of course, not relevant for Db. The connection principle states that 'comparatively more truthlike', MTL_sd(X, Y, T), guarantees 'quantitatively closer to the
truth', Dd(Y, T) ≤ Dd(X, T). (Recall that s_d is s based on d.) The boundary
principle maintains that 'quantitatively closer to the truth' is only possible for
not too strong and not too weak theories, where the boundaries are suggested
by the quantitative boundaries of basic comparative truthlikeness. Finally, the
reduction principle states that when the lower level distance function is the
trivial one, the 'distance from the truth' becomes the basic one.
The desiderata may turn out to be realizable or not. Note that the principles
may be decomposed into an internal and an external principle, when the
definition itself can be split up in that way. When relevant and not mentioned
otherwise, they will be supposed to be imposed and satisfied in this split way.
Note also that the boundary principle is a principle which can simply be added
to a provisional definition when not yet satisfied by that definition.
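For the basic function Db, the connection principle can even be verified exhaustively on a small universe of conceptual possibilities. The following brute-force check is ours; it takes basic 'at least as truthlike', MTL(X, Y, T), to require T ∩ X ⊆ T ∩ Y together with Y − T ⊆ X − T:

```python
from itertools import chain, combinations

def D_b(X, T, g=1.0, gp=1.0):
    """Basic distance Db(X, T) = g*|T - X| + gp*|X - T|."""
    return g * len(T - X) + gp * len(X - T)

def MTL(X, Y, T):
    """Basic 'Y at least as truthlike as X' (our rendering)."""
    return (T & X) <= (T & Y) and (Y - T) <= (X - T)

Mp = {1, 2, 3, 4}
subsets = [set(s) for s in chain.from_iterable(
    combinations(Mp, r) for r in range(len(Mp) + 1))]

# Search all 16^3 triples for a counterexample to the connection principle,
# here with the (arbitrary) weights 0.7 and 0.3.
violations = [(X, Y, T)
              for X in subsets for Y in subsets for T in subsets
              if MTL(X, Y, T) and D_b(Y, T, 0.7, 0.3) > D_b(X, T, 0.7, 0.3)]
print(len(violations))    # no counterexamples: the principle holds
```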
Let us now review a couple of prima facie plausible refined definitions. Since
our experience with 'average' versions is not very impressive, we neglect them.
Let us first look at Niiniluoto's main proposal for a truthlikeness function
and see whether it is useful for our purposes. Niiniluoto takes the truth to be
a complete theory, hence, in our representation, he wants to define the distance
between a subset X of Mp and 'the true structure' t, on the basis of an
underlying df. For finite Mp, his favorite proposal is the min-sum function
(Niiniluoto 1987a, p. 216, (44)):
DN(X, t) = γ d_min(t, X) + γ' d_sum(t, X)

where d_sum(t, X) is defined as Σ_{x∈X} d(t, x)/Σ_{x∈Mp} d(t, x). Moreover, Niiniluoto
assumes that d is balanced in the sense that Σ_{x∈Mp} d(t, x) = (1/2)|Mp|, hence
d_sum(t, X) amounts to 2Σ_{x∈X} d(t, x)/|Mp|. Hence, the second term, although
simply called 'sum', is a kind of relativized sum of distances. Niiniluoto defines,
in general, the verisimilitude of X as 1 − D(X, t). He makes clear that DN is, in
many respects, very satisfactory for his particular conception of the verisimilitude problem. However, for our purposes, DN is not (yet) useful, for we consider
the nomic truth as a set, viz., T. Recall, in general, that Niiniluoto and others
conceive truth approximation as a matter of approaching 'elements by sets',
whereas we distinguish actual truth approximation as a matter of approaching
elements by elements, and nomic truth approximation as a matter of
approaching sets by sets.
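For a finite Mp, Niiniluoto's min-sum function can be sketched as follows; the bit-tuple possibilities and the normalized Hamming distance d are our toy choices (note that this d happens to be balanced: Σ_{x∈Mp} d(t, x) = 2 = (1/2)|Mp|):

```python
def D_N(X, t, Mp, d, gamma=1.0, gamma_prime=1.0):
    """Min-sum function: gamma*d_min(t, X) + gamma_prime*d_sum(t, X), with
    d_sum(t, X) the sum of d(t, x) over X, relativized by the sum over Mp."""
    d_min = min(d(t, x) for x in X)
    d_sum = sum(d(t, x) for x in X) / sum(d(t, x) for x in Mp)
    return gamma * d_min + gamma_prime * d_sum

# Conceptual possibilities as bit tuples, with normalized Hamming distance:
Mp = [(a, b) for a in (0, 1) for b in (0, 1)]
d = lambda x, y: sum(xi != yi for xi, yi in zip(x, y)) / 2

t = (0, 0)                  # the true structure
X = [(0, 1), (1, 1)]        # a theory
print(D_N(X, t, Mp, d))     # d_min = 0.5, d_sum = 1.5/2.0 = 0.75, total 1.25
```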
Niiniluoto (1987a, p. 381–2) rightly suggested that, under certain conditions,
our problem can be rephrased in terms of nomic constituents (see Zwart (1998,
Section 2.6) for a qualitative elaboration of this idea). When Mp is finite, "T =
X" is, informally speaking, the finite conjunction of |Mp| elementary claims:
for every x in X it says that it is nomically possible and for every x not in X
that it is nomically impossible. Its qualitative distance from the truth can be
expressed by the set of elementary claims that are false, which is, of course, a
symmetric difference, and the quantitative distance by the size of that set.
Hence, the resulting quantitative distance is equal to the most simple basic
distance function Db(X, T) = |T − X| + |X − T| (with γ' = γ = 1), since there is
a 1-1 function between T Δ X and the false elementary claims. Hence, by
rephrasing our problem in terms of nomic constituents, we reach a new level
of something like propositional constituents, such that the nomic truth becomes
complete, and hence a kind of actual truth. Although Niiniluoto's min-sum
proposal now leads to a proposal for the distance of a set of nomic constituents
to the truth, he has to presuppose for such a proposal a distance function
between constituents. As we have seen, the most plausible first proposal essentially amounts to our basic distance function. Hence, Niiniluoto's approach
does not yet give a refinement of this basic function, the ultimate topic of
this section.
The most plausible suggestion for our purposes is the following:

D1(X, T) = Σ_{t∈T} DN(X, t) = γ Σ_{t∈T} d_min(t, X) + γ' Σ_{t∈T} d_sum(t, X)

It is easy to check that it is, at most, a pseudo semi-distance function, for (D4)
and (D5) will normally not be satisfied. Moreover, it is not satisfactory in the
light of the desiderata, since none of them is fulfilled.
A decomposable function arises by taking the weighted sum of the minimal
and the maximal mistake:

D3(X, T) = γ min{d(x, z) | x ∈ X & z ∈ T − X} + γ' max{d(x, z) | x ∈ X − T & z ∈ T}

One may also take, in both cases, the maximal mistakes, or the minimal ones:

D5(X, T) = γ max{d(x, z) | x ∈ X & z ∈ T − X} + γ' max{d(x, z) | x ∈ X − T & z ∈ T}

D6(X, T) = γ min{d(x, z) | x ∈ X & z ∈ T − X} + γ' min{d(x, z) | x ∈ X − T & z ∈ T}

Finally, one may sum the minimal mistakes on both sides:

D7(X, T) = γ Σ_{z∈T} d_min(z, X) + γ' Σ_{x∈X} d_min(x, T)
Here, the minimum distance between, e.g., z and X, d_min(z, X), is defined as the
minimum of d(z, x) for all x in X. D7 is discussed by Niiniluoto (1987a, p. 246)
in the general form of a distance function between statements. Intuitively, D7
is very attractive for our purposes. It is a weighted sum of the sums of the
minimal internal and external mistakes, respectively. D7 is even a proper
distance function, which is easy to check, except for the triangle inequality.
Moreover, it satisfies the reduction principle, which is a direct consequence of
the fact that, e.g., d_min(z, X) for trivial d amounts to 1 when z ∉ X and 0 when
z ∈ X, and hence Σ_{z∈T} d_min(z, X) amounts to |T − X|.
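A sketch (ours) of D7(X, T) = γ Σ_{z∈T} d_min(z, X) + γ' Σ_{x∈X} d_min(x, T), with a spot check of the reduction principle: with the trivial d, D7 collapses into the basic Db with γ' = γ = 1:

```python
def d_min(z, A, d):
    """Minimum of d(z, a) for all a in A."""
    return min(d(z, a) for a in A)

def D7(X, T, d, g=1.0, gp=1.0):
    """Weighted sum of the summed minimal internal and external mistakes."""
    return (g * sum(d_min(z, X, d) for z in T)
            + gp * sum(d_min(x, T, d) for x in X))

# The trivial distance function: 1 iff the possibilities are distinct.
trivial = lambda x, y: 0 if x == y else 1

T = {1, 2, 3}
X = {3, 4}
print(D7(X, T, trivial))            # |T - X| + |X - T| = 2 + 1 = 3
print(len(T - X) + len(X - T))      # the basic Db with g' = g = 1: also 3
```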
However, the other principles are not valid in general. The boundary principle
is not valid in general, but, as suggested earlier, this might be imposed as an
extra condition. Hence, more important is the fact that only the internal side
of the connection principle holds unconditionally: when (Rii) applies to
⟨X, Y, T⟩ then D7^i(Y, T) ≤ D7^i(X, T). On the other hand, if (Ri) applies to
⟨X, Y, T⟩ then D7^e(Y, T) ≤ D7^e(X, T) iff

Σ_{y∈Y−(X∪T)} d_min(y, T) ≤ Σ_{x∈X−(Y∪T)} d_min(x, T).

This strong version of (Ri) is trivially satisfied in the case of basic truthlikeness. However, it need not be satisfied in other cases, e.g., in the context of
concretization. For consider the gas model example of Subsection 10.4.2. It is
easy to check that, for instance, the concretization triple of (sets of) gas models
⟨IGM, GMa, WGM⟩ does not satisfy (RiQ), simply due to the fact that the
set GMa is (much) larger than IGM. As a consequence, it may well be the case
that all plausible lower level distance functions in this case, if any, are such
that D7^e(GMa, WGM) is larger than D7^e(IGM, WGM), notwithstanding the
fact that GMa is closer to WGM than IGM, not only according to our
Introduction
When dealing with comparative truthlikeness, it was plausible to evaluate the
truth approximation hypothesis, 'Y is closer to the truth than X', in terms of a
comparative notion of more successfulness. In the present, quantitative context,
we have essentially two different ways of evaluation. In line with the comparative case, we may define quantitative notions of success, corresponding to the
relevant quantitative notion of truthlikeness. Or, for that matter, a failure
function6 corresponding to the relevant distance from the truth. A further
desideratum then is, of course, that 'closer to the truth' guarantees a lower
'failure value' or at least a lower 'expectation value' of that value, assuming
that the data have arisen from random experimentation. However, the suggested
notion of quantitative failure is essentially nonprobabilistic. It is the subject
of the first subsection.
We now also have the possibility of taking into account the posterior probability of the theories which have not yet been falsified. With the help of these,
and a distance from the truth definition, we can calculate the 'estimated distance
from the truth' for each theory. Hence, this is a straightforwardly probabilistic
evaluation of the merits of a theory in approaching the truth.
For the case of a complete truth, Niiniluoto (1987a) has propagated and
developed this idea convincingly. That is, apart from the assumption that theories are aiming at the complete truth about the actual world, it
is a very plausible idea, which we will adapt for approaching the nomic truth.
In both cases we will restrict the attention to finite Mp.
12.3.1. Non-probabilistic (quantitative) truth approximation
The basic failure function Fb(X, R/S) is the plausible analogue of Db(X, T), with the data R/S in the role of the truth:

Fb(X, R/S) = γ|R − X| + γ'|X − S|

which does not exceed γ|R| + γ'|Mp − S|, and increases from 0 to Db(X, T).
As in the case of success related to the actual truth, only the expectation
version of the success principle holds: if Db(Y, T) ≤ Db(X, T) then the
(internal/external) failure value of Y may be expected to be not higher than
the (internal/external) failure value of X, as far as R can be conceived as the
result of random selection of |R| members of T and S as the result of random
selection of |Mp − S| members of Mp − T. It is easy to check that Fb(X, R/S)
satisfies the relevant version of the connection principle, viz., if Y is qualitatively
more successful than X, MS(X, Y, R/S), then Y has a lower, or at least not a
higher, failure value, Fb(Y, R/S) ≤ Fb(X, R/S). Note finally that Fb(X, R/S) satisfies the relevant version of the boundary principle, viz., Fb(Y, R/S) ≤ Fb(X, R/S)
implies |X ∩ R| ≤ |Y| ≤ |X ∪ S|.
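Assuming the basic failure function has the form Fb(X, R/S) = γ|R − X| + γ'|X − S| (the analogue of Db suggested by the boundary clause |X ∩ R| ≤ |Y| ≤ |X ∪ S|), a minimal sketch of ours, with a spot check of the F-connection principle:

```python
def F_b(X, R, S, g=1.0, gp=1.0):
    """Basic failure value: g*|R - X| + gp*|X - S|, for accepted instances R
    and strongest accepted law S (with R a subset of S)."""
    return g * len(R - X) + gp * len(X - S)

R = {1, 2}                 # established possibilities
S = {1, 2, 3, 4}           # strongest established law
X = {2, 5}                 # a theory
Y = {1, 2, 3}              # qualitatively more successful than X:
                           # R & X <= Y and Y - S <= X - S
print(F_b(X, R, S))        # |R - X| + |X - S| = 1 + 1 = 2
print(F_b(Y, R, S))        # 0 + 0 = 0, lower failure value, as the
                           # F-connection principle requires
```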
Let us now turn to refined notions of success Fd, based on an underlying
proper distance function d. As in the case of distance from the truth, we will
assume, when relevant, weighted versions, of which the components can now
be indicated by the relevant data:

Fd(X, R/S) = γFd(X, R) + γ'Fd(X, S)
F-reduction principle: Fdt(Y, R/S) ≤ Fdt(X, R/S) ⇒ Fb(Y, R/S) ≤ Fb(X, R/S), with dt the trivial distance function

ES-principle: Dd(Y, T) ≤ Dd(X, T) ⇒ EV(Fd(Y, R/S)) ≤ EV(Fd(X, R/S))

This principle maintains that 'closer to the truth', though not guaranteeing a
lower failure value, leads to a lower expected value (EV) of the failure value,
where the expected value is based on an appropriate random way of obtaining
R and S. Recall that the basic definition of failure satisfies the F-connection,
the F-boundary and the ES-principle. The F-reduction principle is, as in the
case of the basic distance from the truth, not relevant for basic success.
Recall that

D7(X, T) = γ Σ_{z∈T} d_min(z, X) + γ' Σ_{x∈X} d_min(x, T)

was, relatively speaking, the best 'distance from the truth' function. It suggests
the following failure function:

F7(X, R/S) = γ Σ_{z∈R} d_min(z, X) + γ' Σ_{x∈X} d_min(x, S)
It is easily seen to satisfy the F-reduction principle, and the F-boundary principle
can again simply be imposed. Moreover, the internal or R-side satisfies the
F-connection principle, which is easy to check, and it satisfies the
ES-principle.
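On the plausible reading that the internal side of F7 sums the minimal mistakes relative to R and the external side relative to S, F7 can be sketched as follows (our toy possibilities and distance function):

```python
def d_min(z, A, d):
    """Minimum of d(z, a) for all a in A."""
    return min(d(z, a) for a in A)

def F7(X, R, S, d, g=1.0, gp=1.0):
    """Refined failure value suggested by D7: summed minimal mistakes of X
    relative to the instances R (internal) and the law S (external)."""
    return (g * sum(d_min(z, X, d) for z in R)
            + gp * sum(d_min(x, S, d) for x in X))

# One-dimensional toy possibilities with d(x, y) = |x - y|:
d = lambda x, y: abs(x - y)
R = {0, 1}                 # established instances
S = {0, 1, 2, 3}           # strongest established law
X = {1, 4}                 # a theory
print(F7(X, R, S, d))      # internal: 1 + 0; external: 0 + 1; total 2.0
```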
However, the S-side does not unconditionally satisfy the F-connection and the
ES-principle, but under certain conditions it does. For the S-side of
the F-connection principle, essentially the same thing holds as for the external
side of the D-connection principle. If (Ri)S applies to ⟨X, Y, S⟩ then
F7(Y, S) ≤ F7(X, S) iff

Σ_{y∈Y−(X∪S)} d_min(y, S) ≤ Σ_{x∈X−(Y∪S)} d_min(x, S).

Moreover, if we strengthen (Ri)S with this last inequality to (RiQ)S, the S-side
of the F-connection principle is guaranteed. This strengthened condition is
trivially satisfied in the case of basic truthlikeness. But it need not be satisfied
in other cases, in particular in the case of concretization. For D7 we illustrated
this with the gas model example of Subsection 10.4.2. This illustration can now
formally be simply reproduced, whereas WGM is now conceived not as the
truth, but 'merely' as an established law. But again, there is another sufficient condition for 'successful' double concretization. If the adapted version
of the weak boundary condition applies, i.e., 0 ≤ |Y| ≤ |X| + |S|, then
d_mean(Y, S) ≤ |X|/(|X| + |S|) × d_mean(X, S) is a sufficient condition for the target
external inequality.
Turning now to the S-side of the ES-prin