
PROBABILITY (WAHRSCHEINLICHKEIT)

1. Historical introduction

The concept of probability is used in everyday language, mathematics, and philosophy. Aristotle defined 'probable' (Gr. 'eoikota') as that which usually occurs. The term 'pithanon' was used in the Greek rhetorical and sceptical traditions to characterize plausible opinions or likely sense impressions. This term was translated into Latin by Cicero as 'probabile' and 'veri simile'. Most modern Western languages have followed him: the standard term in English is 'probability', while some languages include a reference to the concept of truth (Lat. verum), e.g., 'Wahrscheinlichkeit' in German and 'sannolikhet' in Swedish. Especially owing to Karl Popper's work since the 1960s, the terms 'truthlikeness' and 'verisimilitude' in English ('Wahrheitsähnlichkeit' in German) have acquired a special meaning as 'closeness to the truth', which does not have the same sense and structure as 'probability'.1 Medieval and Renaissance philosophers assigned the concept of 'probabilitas' to beliefs or opinions by counting the number of authorities who supported them.2 The mathematical calculus of probabilities, inspired by games of chance, was developed by Blaise Pascal and Pierre Fermat in the 1650s, and continued by Thomas Bayes and the Marquis de Laplace. An axiomatic treatment of probability using modern measure theory was given by A.N. Kolmogorov in 1933.3 In the 20th century, philosophers have worked out precise interpretations of the concept of probability. The main division, between physical and epistemic interpretations, has its roots in the classical discussions, where probability was linked both to frequently occurring events and to degrees of rational belief.4

2. Mathematical probability

The classical or Laplacean definition of probability assumes a framework of equally possible basic outcomes of an experiment (e.g., tossing a coin, rolling a die), and defines the probability of an event as the number of favourable cases divided by the number of all cases.5 It follows that the probability P(A) of an event A is a number between 0 and 1, where P(A) = 0 for an impossible event A and P(A) = 1 for a sure or necessary event A. Let A∪B be the event that at least one of A and B occurs, and A∩B the event that both A and B occur. According to the Principle of Additivity, if A and B are mutually exclusive events, the probability of their disjunction P(A∪B) equals the sum of P(A) and P(B). Further, the probability that A does not occur is 1 - P(A). The conditional probability P(A/B) of A given B is defined as the ratio P(A∩B)/P(B). Events A and B are probabilistically independent if the probability P(A∩B) equals the product of P(A) and P(B), i.e., if P(A/B) = P(A). The classical theory of chance correctly identifies the basic mathematical properties of probability. The main addition in Kolmogorov's axiomatization is the generalization of the additivity requirement to an infinite number of disjuncts. This allows a precise proof of the Law of Large Numbers: if rf_n(A) is the relative frequency of event A in a series of n independent repetitions of an experiment, then it can be proved that, with probability 1, the value of rf_n(A) approaches the value P(A) in the limit as n grows to infinity.
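The Law of Large Numbers can be illustrated by a simple simulation. The following sketch (not part of the original text; the function name `relative_frequency` and the use of a fixed seed are illustrative choices) computes rf_n(A) for a fair coin, where P(A) = 1/2, for increasing n:

```python
# Illustrative sketch: the relative frequency rf_n(A) of an event A with
# probability p approaches p as the number of independent trials n grows.
import random

def relative_frequency(p, n, seed=42):
    """Relative frequency of an event with probability p in n independent trials."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    successes = sum(1 for _ in range(n) if rng.random() < p)
    return successes / n

# rf_n(A) for a fair coin (p = 0.5) over longer and longer series:
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(0.5, n))
```

Note that, exactly as the criticism quoted below in connection with the frequency interpretation points out, the convergence holds only "almost surely": any particular finite series may deviate from p.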

However, the applicability of the classical definition is severely restricted by the assumption that the basic cases have to be symmetric or "equally possible". This is not true, e.g., for a loaded die, and it fails in most applications of probability to natural and social phenomena (such as weather, mortality, annuities, and errors of measurement). Its unlimited use has also led to logical paradoxes.

3. Physical probability

Physical (material, empirical) interpretations assert that probability is a real magnitude like length and weight, and it can be measured in an objective way by statistical data.

3.1. Probability as relative frequency

Many chance phenomena seem to have stable relative frequencies: the number of tails in a sufficiently long series of tosses is close to 1/2. It would be arbitrary to identify probability with relative frequency within some finite series, but it can be defined as the limit of such relative frequencies when the series is repeated ad infinitum. In this sense, probability is an idealization of observable long-run frequencies. An alternative hypothetical formulation says that the probability of an event (or attribute) is the limit toward which its relative frequency would converge in an infinite series (or reference class).6 The frequency view was proposed by R.L. Ellis in 1843, and the first serious attempt to formulate it precisely was made by John Venn in 1866 in The Logic of Chance. Influenced by Venn, Charles S. Peirce defined the probability of an argument as a truth-frequency, i.e., as the relative number of cases where the argument leads from true premises to a true conclusion. Later attempts to make the frequency interpretation precise include the works of Richard von Mises and Hans Reichenbach. The main technical difficulty is to characterize in a consistent way the "random sequences" or "collectives" relative to which the limits of relative frequencies should remain stable.

The frequency definition has been the common background assumption of R.A. Fisher's approach and the "orthodox" Neyman-Pearson theory of statistical inference in the 20th century. One criticism of this approach notes that, according to the Law of Large Numbers, the equality of probability and the limit of relative frequency holds only "almost surely" or with probability 1, and it should not be made an analytic truth by stipulation.7 Another criticism is that this interpretation applies probability only to repeatable event types, so that it does not make sense to speak of the probability of unique or singular events (e.g., the probability of rain in Hamburg on January 1, 2000) or of the probability of hypotheses (e.g., Einstein's theory of relativity). An attempt to handle the latter problem with the concept of "weight" was made by Reichenbach.

3.2. Probability as propensity

Leibniz already suggested that probability should be understood as "degree of possibility". The idea that there are real possibilities in nature, independently of epistemic uncertainty, was discussed by A.A. Cournot and C.S. Peirce in the 19th century. Following the principles of his indeterministic "tychism", Peirce proposed in 1910 that probability should be understood as a dispositional "habit" or "would-be". This interpretation of physical probability as propensity was reintroduced in 1959 by Karl Popper in his discussion of quantum mechanics.

According to the long-run propensity interpretation, probability is the disposition of a chance set-up to produce series of events with characteristic relative frequencies. This formulation does not yet solve the problem of unique events. The single-case propensity interpretation defines probability as the dispositional strength of a chance set-up to produce an outcome of a certain kind on a single trial of that set-up. Such propensities between 0 and 1 are thus "degrees of possibility" for events that are not completely determined by objective antecedent or causal conditions. Single-case propensity statements become testable by observable relative frequencies if there is a sufficient number of similar set-ups (e.g., atoms of the same radioactive substance).8

4. Epistemic probability

Epistemic or doxastic interpretations take probability to be always relative to our knowledge. Laplace, who supported determinism, asserted that probability is an expression of our ignorance of the real causes of events. According to his Principle of Indifference, two events should be treated as "equally possible" if we do not know of any reason to prefer one to another. Later Bayesians define probabilities as rational degrees of belief.

4.1. Subjective probability

According to the subjective or personal interpretation, the probability P(H/E) of a hypothesis H given available evidence E is the degree of belief in the truth of H warranted by E. The tool for studying such probabilities is Bayes' Theorem, which states that the posterior probability P(H/E) is proportional to the product of the prior probability P(H) of H and the likelihood P(E/H) of H relative to E. Psychological studies show that the actual intensities of beliefs of human agents do not always behave in the manner of mathematical probability. However, as Frank Ramsey and Bruno de Finetti showed in the 1920s and 1930s, assuming some rationality conditions on the agent's comparative judgments and preferences, it can be proved that "rational" degrees of belief can be represented by numerical values that satisfy the axioms of probability. Ramsey's results were later generalized in Bayesian decision theory. De Finetti's theorems characterize rational degrees of belief as coherent betting ratios.9
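Bayes' Theorem can be sketched numerically. In the following illustration (the numbers and the function name `posterior` are made up for this sketch, not taken from the text), P(E) is obtained by the law of total probability, summing over H and its negation:

```python
# Illustrative sketch of Bayes' Theorem:
#   P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|not-H) * P(not-H).
def posterior(prior_h, like_e_given_h, like_e_given_not_h):
    """Posterior probability P(H|E) from the prior and the two likelihoods."""
    p_e = like_e_given_h * prior_h + like_e_given_not_h * (1 - prior_h)
    return like_e_given_h * prior_h / p_e

# Hypothetical values: a hypothesis with prior 0.1, and evidence five
# times as likely under H as under not-H.
print(posterior(0.1, 0.5, 0.1))  # the posterior rises well above the prior
```

The proportionality mentioned in the text is visible here: for fixed evidence E, P(H/E) varies with the product of the prior and the likelihood, since P(E) is the same normalizing constant for H and its rivals.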

4.2. Logical probability

Some philosophers, like John Maynard Keynes (1921), have tried to show that there are enough rationality constraints to make degrees of belief or "degrees of confirmation" unique. Usually such suggestions are based upon principles of epistemic indifference or informational equality. In the 1940s, Rudolf Carnap applied formal tools to construct a system of inductive logic, where the probabilities of statements in a simple first-order language with individual names and one-place properties can be determined. In the 1950s he generalized this approach to a continuum of inductive probability measures. As a reply to Popper's criticism, Carnap also distinguished two senses of "degree of confirmation": the posterior probability (i.e., P(H/E)) and the increase of the probability of H due to E (i.e., P(H/E) - P(H)).10 Carnap understood logical probabilities as degrees of partial entailment between propositions. One difficulty with this view is that such degrees seem to depend on parameters which express some kind of context-dependent regularity assumptions. Hence, logical probabilities are not completely objective, but relative to some empirical or subjective assumptions. Another problem for Carnap is that in his system all genuinely universal generalizations (such as 'All ravens are black', where the domain is not restricted to any finite number of objects) have probability 0 given any finite singular evidence. This problem was solved in 1964 by Jaakko Hintikka, whose system of inductive logic allows universal generalizations to receive non-zero probabilities.11
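Carnap's two senses of confirmation can come apart, as the following sketch (with hypothetical numbers chosen for illustration) shows: evidence may confirm H in the incremental sense, P(H/E) - P(H) > 0, even while the posterior P(H/E) itself stays low.

```python
# Illustrative sketch of Carnap's two senses of "degree of confirmation":
#   firmness:  P(H|E) itself
#   increase:  P(H|E) - P(H)
def posterior(prior_h, like_e_given_h, like_e_given_not_h):
    p_e = like_e_given_h * prior_h + like_e_given_not_h * (1 - prior_h)
    return like_e_given_h * prior_h / p_e

prior = 0.1                       # hypothetical prior P(H)
post = posterior(prior, 0.5, 0.1) # hypothetical likelihoods
firmness = post                   # P(H|E)
increase = post - prior           # P(H|E) - P(H)
# Here the increase is positive, so E confirms H in the second sense,
# even though P(H|E) remains below 0.5: H is still improbable but better
# supported than before.
```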

5. Conclusion

Some philosophers are probabilistic "monists" in the sense that they try to reduce all usage of this concept to only one interpretation. Other philosophers favour "pluralism". For example, Carnap argued that frequentist and logical probabilities both exist in different contexts independently of each other. Another kind of pluralism would be to accept the single-case propensity interpretation for the concept of probability in scientific laws, and the personal Bayesian concept for the treatment of epistemic uncertainty within scientific inference.12

NOTES

1 See Popper 1963, Niiniluoto 1987.
2 See Byrne 1968.
3 See von Plato 1994.
4 See Hacking 1975.
5 See Laplace 1951.
6 See Salmon 1966.
7 See Stegmüller 1973.
8 See Fetzer 1981, Suppes 1984, Fetzer 1988.
9 See Kyburg and Smokler 1964, Savage 1954.
10 Carnap 1962, Popper 1963.
11 See Hintikka and Suppes 1966.
12 This is the original English version of the paper that has appeared, translated into German by Silja Freudenberger, as the entry Wahrscheinlichkeit in H.J. Sandkühler (ed.), Enzyklopädie Philosophie I-II, Felix Meiner Verlag, Hamburg, 1999, pp. 1731-33.

BIBLIOGRAPHY

Byrne, E., 1968, Probability and Opinion: A Study of Medieval Presuppositions of Post-Medieval Theories of Probability, The Hague.
Carnap, R., 1962, The Logical Foundations of Probability, 2nd ed., Chicago.
Fetzer, J., 1981, Scientific Knowledge, Dordrecht.
Fetzer, J. (ed.), 1988, Probability and Causality, Dordrecht.
Hacking, I., 1975, The Emergence of Probability, Cambridge.
Hintikka, J. and Suppes, P. (eds.), 1966, Aspects of Inductive Logic, Amsterdam.
Keynes, J.M., 1921, A Treatise on Probability, London.
Kyburg, H.E. and Smokler, H. (eds.), 1964, Studies in Subjective Probability, New York.
Laplace, P.S., 1951, A Philosophical Essay on Probabilities, New York.
Niiniluoto, I., 1987, Truthlikeness, Dordrecht.
von Plato, J., 1994, Creating Modern Probability, Cambridge.
Popper, K.R., 1963, Conjectures and Refutations, London.
Salmon, W., 1966, The Foundations of Scientific Inference, Pittsburgh.
Savage, L.J., 1954, The Foundations of Statistics, New York.
Stegmüller, W., 1973, Personelle und statistische Wahrscheinlichkeit, Berlin.
Suppes, P., 1984, Probabilistic Metaphysics, Oxford.
