Randomness is Unpredictability


Antony Eagle
Exeter College, Oxford, OX1 3DP
antony.eagle@philosophy.oxford.ac.uk

Forthcoming in British Journal for the Philosophy of Science.
Abstract: The concept of randomness has been unjustly neglected in recent philosophical literature, and when philosophers have thought about it, they have usually acquiesced in views about the concept that are fundamentally flawed. After indicating the ways in which these accounts are flawed, I propose that randomness is to be understood as a special case of the epistemic concept of the unpredictability of a process. This proposal arguably captures the intuitive desiderata for the concept of randomness; at least it should suggest that the commonly accepted accounts cannot be the whole story and more philosophical attention needs to be paid.
[R]andomness… is going to be a concept which is relative to our body of knowledge, which will somehow reflect what we know and what we don't know.
Henry E. Kyburg, Jr. (1974, 217)
Phenomena that we cannot predict must be judged random.
Patrick Suppes (1984, 32)
The concept of randomness has been sadly neglected in the recent philosophi-
cal literature. As with any topic of philosophical dispute, it would be foolish to
conclude from this neglect that the truth about randomness has been established.
Quite the contrary: the views about randomness in which philosophers currently
acquiesce are fundamentally mistaken about the nature of the concept. More-
over, since randomness plays a significant role in the foundations of a number
of scientific theories and methodologies, the consequences of this mistaken view
are potentially quite serious. After I briefly outline the scientific roles of ran-
domness, I will survey the false views that currently monopolise philosophical
thinking about randomness. I then make my own positive proposal, not merely
as a contribution to the correct understanding of the concept, but also hopefully
prompting a renewal of philosophical attention to randomness.
The view I defend, that randomness is unpredictability, is not entirely with-
out precedent in the philosophical literature. As can be seen from the epigraphs
I quoted at the beginning of this paper, the connection between the two concepts
has made an appearance before.[1] These quotations are no more than suggestive, however: these authors were aware that there is some kind of intuitive link, but
made no efforts to give any rigorous development of either concept in order that
we might see how and why randomness and prediction are so closely related.
Indeed, the Suppes quotation is quite misleading: he adopts exactly the perni-
cious hypothesis I discuss below (§3.2), and takes determinism to characterise
predictability—so that what he means by his apparently friendly quotation is ex-
actly the mistaken view I oppose! Correspondingly, the third objective I have in
this paper is to give a plausible and defensible characterisation of the concept of
predictability, in order that we might give philosophical substance and content to
this intuition that randomness and predictability have something or other to do
with one another.[2]

[1] Another example is more recent: 'we say that an event is random if there is no way to predict its occurrence with certainty' (Frigg, 2004, 430).
[2] Thanks to Steven French for emphasising the importance of these motivating remarks.
1 Randomness in Science
The concept of randomness occurs in a number of different scientific contexts. If
we are to have any hope of giving a philosophical concept of randomness that is
adequate to the scientific uses, we must pay some attention to the varied guises in
which randomness comes.
All of the following examples are in some sense derivative from the most
central and crucial appearance of randomness in science—randomness as a pre-
requisite for the applicability of probabilistic theories. Von Mises was well aware
of the centrality of this role; he made randomness part of his definition of proba-
bility. This association of randomness with von Mises’ hypothetical frequentism
has unfortunately meant that interest in randomness has declined with the fortunes
of that interpretation of probability. As I mentioned, this decline was hastened by
the widespread belief that randomness can be explained merely as indeterminism.
Both of these factors have led to the untimely neglect of randomness as a centrally important concept for understanding a number of issues, among them being
the ontological force of probabilistic theories, the criteria and grounds for accep-
tance of theories, and how we might evaluate the strength of various proposals
concerning statistical inference. Especially when one considers the manifest in-
adequacies of ontic accounts of randomness when dealing with these issues (§2),
the neglect of the concept of randomness seems to have left a significant gap in the
foundations of probability. We should, however, be wary of associating worries
about randomness too closely with issues in the foundations of probability—those
are only one aspect of the varied scientifically important uses of the concept. By
paying attention to the use of the concept, hopefully we can begin to construct an
adequate account that genuinely plays the role required by science.
1.1 Random Systems
Many dynamical processes are modelled probabilistically. These are processes
which are modelled by probabilistic state transitions.[3]
Paradigm examples include
the way that present and future states of the weather are related, state transitions
in thermodynamics and between macroscopic partitions of classical statistical me-
chanical systems, and many kinds of probabilistic modelling. Examples from
‘chaos theory’ have been particularly prominent recently (Smith, 1998).
For example, in ecohydrology (Rodriguez-Iturbe, 2000), the key concept is
the soil water balance at a point within the rooting depth of local plants. The dif-
ferential equations governing the dynamics of this water balance relate the rates
of rainfall, infiltration (depending on soil porosity and past soil moisture content),
evapotranspiration and leakage (Laio et al., 2001, Rodriguez-Iturbe et al., 1999).
The occurrence and amount of rainfall are random inputs.[4]
The details are in-
teresting, but for our purposes the point to remember is that the randomness of
the rainfall input is important in explaining the robust structure of the dynamics of
soil moisture. Particular predictions of particular soil moisture based on particular
volumes of rainfall are not nearly as important for this project as understanding
the responses of soil types to a wide range of rainfall regimes. The robust prob-
abilistic structures that emerge from low-level random phenomena are crucial to
the task of explaining and predicting how such systems evolve over time and what
consequences their structure has for the systems that depend on soil moisture, for
example, plant communities.[5]

[3] This is unlike the random mating example (§1.2), where we have deterministic transitions between probabilistically characterised states.
[4] These are modelled by a Poisson distribution over times between rainfall events, and an exponential probability density function over volumes of rainfall.
[5] Strevens (2003) is a wonderful survey of the way that probabilistic order can emerge out of the complexity of microscopic systems.
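To make the model described in note [4] concrete, here is a minimal sketch of such a rainfall input: storm arrivals as a Poisson process (exponential waiting times) and storm depths drawn from an exponential density. The rate and mean-depth parameters are illustrative placeholders, not values from the ecohydrology literature cited above.

```python
import random

def simulate_rainfall(days=365, arrival_rate=0.2, mean_depth_cm=1.5, seed=1):
    """Return (day, depth_cm) storm events: Poisson arrivals, exponential depths."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(arrival_rate)        # waiting time to next storm
        if t > days:
            break
        events.append((t, rng.expovariate(1.0 / mean_depth_cm)))
    return events

events = simulate_rainfall()
print(len(events), "storms,", round(sum(d for _, d in events), 1), "cm in total")
```

It is the statistics of this random input (the rate and depth parameters), not any particular simulated storm, that drive the robust soil-moisture dynamics discussed above.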
Similar dynamical models of other aspects of the natural world, including con-
vection currents in the atmosphere, the movement of leaves in the wind, and the
complexities of human behaviour, are also successfully modelled as processes
driven by random inputs. But the simplest examples are humble gaming devices
like coins and dice. Such processes are random if anything is: the sequence of out-
comes of heads and tails of a tossed coin exhibits disorder, and our best models of
the behaviour of such phenomena are very simple probabilistic models.
At the other extreme is the appearance of randomness in the outcomes of sys-
tems of our most fundamental physics: quantum mechanical systems. Almost
all of the interpretations of quantum mechanics must confront the randomness of
experimental outcomes with respect to macroscopic variables of interest; many
account for such randomness by positing a fundamental random process. For in-
stance, collapse theories propose a fundamental stochastic collapse of the wave
function onto a particular determinate measurement state, whether mysteriously
induced by an observer (Wigner, 1961), or as part of a global indeterministic dy-
namics (Bell, 1987a, Ghirardi et al., 1986). Even no-collapse theories have to
claim that the random outcomes are not reducible to hidden variables.[6]

[6] For details, see Albert (1992), Hughes (1989).
1.2 Random Behaviour
The most basic model that population genetics provides for calculating the dis-
tribution of genetic traits in an offspring generation from the distribution of such
traits in the parent generation is the Hardy-Weinberg Law (Hartl, 2000, 26–9).[7]
This law idealises many aspects of reproduction; one crucial assumption is that
mating between members of the parent generation is random. That is, whether
mating occurs between arbitrarily selected members of the parent population does
not depend on the presence or absence of the genetic traits in question in those
members. Human mating, of course, is not random with respect to many genetic
traits: the presence of particular height or skin colour, for example, does influence
whether two human individuals will mate. But even in humans mating is random
with respect to some traits: blood group, for example. In some organisms, for in-
stance some corals and fish, spawning is genuinely random: the parent population
gathers in one location and each individual simply ejects their sperm or eggs into
the ocean where they are left to collide and fertilise; which two individuals end up
mating is a product of the random mixing of the ocean currents. The randomness
of mating is a pre-requisite for the application of the simple dynamics; there is no
explicit presence of a random state transition, but such behaviour is presupposed in the application of the theory.

[7] The law relates genotype distribution in the offspring generation to allele distribution in the parents.
Despite its many idealisations, the Hardy-Weinberg principle is explanatory of
the dynamics of genetic traits in a population. Complicating the law by making its
assumptions more realistic only serves to indicate how various unusual features of
actual population dynamics can be deftly explained as a disturbance of the basic
underlying dynamics encoded in the law. As we have seen, for some populations,
the assumptions are not even idealised. Each mating event is random, but never-
theless the overall distribution of mating is determined by the statistical features
of the population as a whole.
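As an illustration of how random mating at the level of individuals yields a deterministic statistical regularity at the level of the population, here is a minimal sketch of the Hardy-Weinberg calculation for the simplest case, a single two-allele locus; the allele frequency used is an arbitrary example.

```python
def hardy_weinberg(p):
    """Offspring genotype frequencies at a two-allele locus with freq(A) = p."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

# Under random mating the offspring genotype distribution is fixed by the
# parental allele frequencies alone, whatever the parental genotypes were.
print(hardy_weinberg(0.3))   # ≈ {'AA': 0.09, 'Aa': 0.42, 'aa': 0.49}
```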
Another nice example of random behaviour occurs in game theory. In many
games where players have only incomplete information about each other, a ran-
domising mixed strategy dominates any pure strategy (Suppes, 1984, 210–2).
Another application of the concept of randomness is to agents involved in the
evolution of conventions (Skyrms, 1996, 75–6). For example, in the convention
of stopping at lights which have two colours but no guidance as to the intended
meaning of each, or in the reading of certain kinds of external indicators in a game
of chicken (hawk-dove), the idea is that the players in the game can look to an ex-
ternal source, perceived as random, and take that as providing a way of breaking
a symmetry and escaping a non-optimal mixed equilibrium in favour of what Au-
mann calls a correlated equilibrium. In this case, as in many others, the epistemic
aspects of randomness are most important for its role in scientific explanations.
1.3 Random Sampling
In many statistical contexts, experimenters have to select a representative sample
of a population. This is obviously important in cases where the statistical proper-
ties of the whole population are of most interest. It is also important when con-
structing other experiments, for instance in clinical or agricultural trials, where
the patients or fields selected should be independent of the treatment given and
representative of the population from which they came with respect to treatment
efficacy. The key assumption that classical statistics makes in these cases is that
the sample is random (Fisher, 1935).[8]
The idea here, again, is that we should ex-
pect no correlation between the properties whose distribution the test is designed
to uncover and the properties that decide whether or not a particular individual
should be tested.

[8] See also Howson (2000), pp. 48–51 and Mayo (1996). The question whether Bayesian statistics should also use randomisation is addressed by Howson and Urbach (1993), pp. 260–74. One plausible idea is that if Bayesians have priors that rule out bizarre sources of correlation, and randomising rules out more homely sources of correlation, then the posterior after the experiment has run is reliable.
In Fisher’s famous thought experiment, we suppose a woman claims to be able
to taste whether milk was added to the empty cup or to the tea. We wish to test
her discriminatory powers; we present her with eight cups of tea, exactly four of
which had the milk added first. The outcome of a trial of this experiment is a
judgement by the woman of which cups of tea had milk. The experimenter must
strive to avoid correlation between the order in which the cups are presented, and
whatever internal algorithm the woman uses to decide which cups to classify as
milk-first. That is, he must randomise the cup order. If her internal algorithm is
actually correlated with the presence of milk-first, the randomising should only
rule out those cases where it is not, namely, those cases where she is faking it.
An important feature of this case is that it is important that the cup selection
be random to the woman, but not to the experimenters. The experimenters want a
certain kind of patternlessness in the ordering of the cups, a kind of disorder that
is designed to disturb accidental correlations (Dembski, 1991). The experimenters
also wish themselves to know in what order the cups are coming; the experiment
would be uninterpretable without such knowledge. Intuitively, this order would
not be random for the experimenters: they know which cup comes next, and they
know the recipe by which they computed in which order the cups should come.
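The logic of this randomised design can be made concrete with a small sketch: the experimenter randomises which four of the eight cups are milk-first, and the woman's judgement is scored against the exact null distribution over all C(8,4) = 70 possible selections. Her guessing procedure below is a mere stand-in, not a model of any real discriminatory ability.

```python
import itertools
import random

rng = random.Random(42)
milk_first = set(rng.sample(range(8), 4))   # experimenter's randomised order
guess = set(rng.sample(range(8), 4))        # stand-in for the woman's judgement

hits = len(milk_first & guess)
# Exact null distribution: if her selection of four cups is unrelated to the
# truth, every one of the C(8,4) = 70 possible selections is equally likely.
selections = list(itertools.combinations(range(8), 4))
p_value = sum(len(milk_first & set(s)) >= hits for s in selections) / len(selections)
print(f"hits = {hits}, exact p-value = {p_value:.3f}")
```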
1.4 Caprice, Arbitrariness, and Noise
John Earman has argued that classical Newtonian mechanics is indeterministic,
on the basis of a very special kind of case (Earman, 1986, 33–39). Because New-
tonian physics imposes no upper bound on the velocity of a point particle, it is
nomologically possible in Newtonian mechanics to have a particle whose veloc-
ity is finite but unbounded, which appears at spatial infinity at some time t (this
is the temporal inverse of an unboundedly accelerating particle that limits to an
infinite velocity in a finite time). Prior to t that particle had not been present in
the universe; hence the prior state does not determine the future state, since such a
‘space invader’ particle is possible. Of course, such a space invader is completely
unexpected—it is plausible, I think, to regard such an occurrence as completely
and utterly random and arbitrary. Randomness in this case does not capture some
potentially explanatory aspect of some process or phenomenon, but rather serves
to mark our recognition of complete capriciousness in the event.
More earthly examples are not hard to find. Shannon noted that when mod-
elling signal transmission systems, it is inappropriate to think that the only rele-
vant factors are the information transmitted and the encoding of that information
(Shannon and Weaver, 1949). There are physical factors that can corrupt the phys-
ical representation of that data (say, stray interference with an electrical or radio
signal). It is not appropriate or feasible to explicitly incorporate such disturbances
in the model, especially since they serve a purely negative role and cannot be
6
controlled for, only accommodated. These models therefore include a random
noise factor: random alterations of the signal with a certain probability distribu-
tion. All the models we have mentioned include noise as a confounding factor,
and it is a very general technique for simulating the pattern of disturbances even
in deterministic systems with no other probabilistic aspect. The randomness of
the noise is crucial: if it were not random, it could be explicitly addressed and
controlled for. As it stands, noise in signalling systems is addressed by complex
error-checking protocols, which, if they work, rely crucially on the random and
unsystematic distribution of errors. A further example is provided by the concept
of random mutation in classical evolutionary theory. It may be that, from a bio-
chemical perspective, the alterations in DNA produced by imperfect copying are
deterministic. Nevertheless, these mutations are random with respect to the genes
they alter, and hence the differential fitness they convey.[9]

[9] Thanks to Spencer Maughan for the example.
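A small sketch may help fix the idea that error-checking relies on the unsystematic distribution of errors: below, each transmitted bit is flipped independently with a small probability, and a single parity bit detects any odd number of flips. The message, flip probability, and seed are illustrative.

```python
import random

def transmit(bits, flip_prob=0.05, seed=7):
    """Flip each bit independently with probability flip_prob (random noise)."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < flip_prob) for b in bits]

message = [1, 0, 1, 1, 0, 0, 1, 0]
sent = message + [sum(message) % 2]      # append an even-parity check bit
received = transmit(sent)
print("error detected" if sum(received) % 2 else "parity check passed")
```

Were the noise systematic rather than random, a scheme like this could be fooled by correlated errors; the protocol's reliability presupposes their unsystematic distribution.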
2 Concepts of Randomness
If we are to understand what randomness is, we must begin with the scientifically
acceptable uses of the concept. These form a rough picture of the intuitions that
scientists marshal when describing a phenomenon as random; our task is to sys-
tematise these intuitions as best we can into a rigorous philosophical analysis of
this intuitive conception.
Consider some of the competing demands on an analysis of randomness that
may be prompted by our examples.
(1) Statistical Testing We need a concept of randomness adequate for use in
random sampling and randomised experiments. In particular, we need to
be able to produce random sequences on demand, and ascertain whether a
given sequence is random.
(2) Finite Randomness The concept of randomness must apply to the single
event, as in Earman’s example or a single instance of random mating. It
must at least apply to finite phenomena.
(3) Explanation and Confirmation Attributions of randomness must be able
to be explanatorily effective, indicating why certain systems exhibit the
kinds of behaviour they do; to this end, the hypothesis that a system is
random must be amenable to incremental empirical confirmation or discon-
firmation.
(4) Determinism The existence of random processes must be compatible with
determinism; else we cannot explain the use of randomness to describe pro-
cesses in population genetics or chaotic dynamics.
Confronted with this variety of uses of randomness to describe such varied phe-
nomena, one may be tempted to despair: “Indeed, it seems highly doubtful that
there is anything like a unique notion of randomness there to be explicated.”
(Howson and Urbach, 1993, 324). Even if one recognises that these demands
are merely suggested by the examples and may not survive careful scrutiny, this
temptation may grow stronger when one considers how previous explications
of randomness deal with the cases we described above. This we shall now do
with the two most prominent past attempts to define randomness: the place se-
lection/statistical test conception and the complexity conception of randomness.
Both do poorly in meeting our criteria; poorly enough that if a better account were
to be proposed, we should reject them.
2.1 Von Mises/Church/Martin-Löf Randomness
Definition 1 (von Mises-Randomness). An infinite sequence $S$ of outcomes of types $A_1, \ldots, A_n$ is vM-random iff (i) every outcome type $A_i$ has a well-defined relative frequency $\mathrm{relf}^S_i$ in $S$; and (ii) for every infinite subsequence $S'$ chosen by an admissible place selection, the relative frequency of $A_i$ remains the same as in the larger sequence: $\mathrm{relf}^{S'}_i = \mathrm{relf}^S_i$.
Immediately, the definition only applies to infinite sequences, and so fails con-
dition (2) of finiteness.
What is an admissible place selection? Von Mises himself says:
[T]he question whether or not a certain member of the original sequence
belongs to the selected partial sequence should be settled independently of
the result of the observation, i.e. before anything is known about the result.
(von Mises, 1957, 25)
The intuition is that, if we pick out subsequences independently of the contents
of the elements we pick (by paying attention only to their indices, for example),
and each of those has the same limit relative frequencies of outcomes, then the se-
quence is random. If we could pick out a biased subsequence, that would indicate
that some set of indices had a greater than chance probability of being occupied
by some particular outcome; the intuition is that such an occurrence would not be
consistent with randomness.
Church (1940), attempting to make von Mises’ remarks precise, proposed that
admissible place selections are recursive functions that decide whether an element $s_i$ is to be included in a subsequence on input of the index number $i$ and the initial segment of the sequence up to $s_{i-1}$. For example, 'select only the odd numbered
elements’, and ‘select any element that comes after the subsequence 010’ are both
admissible place selections. An immediate corollary is that no random sequence
can be recursively computable: else there would be a recursive function that would
choose all and only 1s from the initial sequence, namely, the function which gener-
ates the sequence itself. But if a random sequence cannot be effectively generated,
we cannot produce random sequences for use in statistical testing. Neither can we
effectively test, for some given sequence, whether it is random. For such a test
would involve exhaustively checking all recursive place selections to see whether
they produce a deviant subsequence, and this is not decidable in any finite time
(though for any non-random sequence, at some finite time the checking machine
will halt with a ‘no’ answer). If random sequences are neither producible nor dis-
cernible, they are useless for statistical testing purposes, failing the first demand.
This point may be made more striking by noting that actual statistical testing only
ever involves finite sequences; and no finite sequence can be vM-random at all.
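To illustrate Church's proposal, here is a minimal sketch of place selections applied to a finite initial segment: each rule sees only the index and the preceding initial segment, never the element being decided on, mirroring von Mises' requirement. As just argued, no finite test of this kind can establish vM-randomness; the pseudo-random test sequence is merely illustrative.

```python
import random

def select(seq, rule):
    """Keep seq[i] iff rule(i, initial segment before i) selects place i."""
    return [x for i, x in enumerate(seq) if rule(i, seq[:i])]

def freq_of_ones(sub):
    return sum(sub) / len(sub) if sub else float("nan")

rng = random.Random(3)
seq = [rng.randint(0, 1) for _ in range(5000)]

odd_places = lambda i, prefix: i % 2 == 1               # ignores contents entirely
after_010 = lambda i, prefix: prefix[-3:] == [0, 1, 0]  # looks only backwards

print("whole sequence:", round(freq_of_ones(seq), 3))
for name, rule in [("odd places", odd_places), ("after 010", after_010)]:
    print(name + ":", round(freq_of_ones(select(seq, rule)), 3))
```

On a sequence satisfying the vM condition, every such selected subsequence would preserve the relative frequency of 1s; a selection rule that found a biased subsequence would witness non-randomness.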
Furthermore, it is perfectly possible that some genuinely vM-random infinite
sequence has an arbitrarily biased initial segment, even to the point where all the
outcomes of the sequence that actually occur during the life of the universe are 1s.
A theorem of Ville (1939) establishes a stronger result: given any countable set of place selections $\{\varphi_i\}$, there is some infinite sequence $S$ such that the limit frequency of 1s in any subsequence $S' = \varphi_j(S)$ selected by some place selection is one half, despite the fact that for every finite initial segment of the sequence, the frequency of 1s is greater than or equal to one half (van Lambalgen, 1987, 730–1, 745–8). That is, any initial segment of this sequence is not random with
respect to the whole sequence or any infinite selected subsequence. There seems
to be no empirical constraint that could lead us to postulate that such a sequence
is genuinely vM-random. Indeed, since any finite sequence is recursively com-
putable, no finite segment will ever provide enough evidence to justify claiming
that the actual sequence of outcomes of which it is a part is random. That our ev-
idence is at best finite means that the claim that an actual sequence is vM-random
is empirically underdetermined, and deserving of an arbitrarily low credence be-
cause any finite sequence is better explained by some other hypothesis (e.g. that
it is produced by some pseudo-random function). vM-randomness is a profligate
hypothesis that we cannot be justified in adopting. Hence it can play no role in ex-
planations of random phenomena in finite cases, where more empirically tractable
hypotheses will do far better.
One possible exception is in those cases where we have a rigorous demon-
stration that the behaviour in question cannot be generated by a deterministic
system—in that case, the system may be genuinely vM-random. Even granting
the existence of such demonstrations, note that in this case we have made essential
9
appeal to a fact about the random process that produces the sequence, and we have
strictly gone beyond the content of the evidence sequence in making that appeal.
Here we have simply abandoned the quest to explain deterministic randomness.
Random sequences may well exist for infinite strings of quantum mechanical mea-
surement outcomes, but we don’t think that random phenomena are confined to
indeterministic phenomena alone: vM-randomness fails demand (4).
Partly in response to these kinds of worries, a final modification of von Mises’
suggestion was made by Martin-Löf (1966, 1969, 1970). His idea is that biased
sequences are possible but unlikely: non-random sequences, including the types
of sequences considered in Ville’s theorem, form a set of measure zero in the set
of all infinite binary sequences. Martin-Löf’s idea is that truly random sequences
satisfy all the probability one properties of a certain canonical kind: recursive se-
quential significance tests—this means (roughly) that a sequence is random with respect to some hypothesis $H_p$ about probability $p$ of some outcome in that sequence if it does not provide grounds for rejecting $H_p$ at arbitrarily small levels of significance.[10]
Van Lambalgen (1987) shows that Martin-Löf (ML)-random
sequences are, with probability 1, vM-random sequences also—almost all strictly
increasing sets of integers (Wald place selections) select infinite subsequences of
a random sequence that preserve relative frequencies.
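The flavour of such significance testing can be suggested with a crude sketch: a frequency-based test that rejects the hypothesis $H_{1/2}$ when the observed frequency of 1s in a finite initial segment deviates too far from one half. The normal-approximation z-test below is a stand-in for Martin-Löf's recursive sequential tests, not a substitute for them.

```python
import math

def rejects_fair_coin(seq, alpha=1e-6):
    """Two-sided z-test of H_{1/2} on a finite initial segment."""
    n = len(seq)
    z = abs(sum(seq) - n * 0.5) / math.sqrt(n * 0.25)
    tail = math.erfc(z / math.sqrt(2))    # approximate two-sided p-value
    return tail < alpha

print(rejects_fair_coin([1] * 600 + [0] * 400))   # True: bias detectable
print(rejects_fair_coin([1, 0] * 500))            # False—yet '1010...' is hardly
# random; frequency tests alone are far too weak, which is why Martin-Löf
# quantifies over a whole canonical family of tests.
```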
Finally, as Dembski (1991) points out, for the purposes of statistical testing,
“Randomness, properly to be randomness, must leave nothing to chance.” (p. 75)
This is the idea that in constructing statistical tests and random number generators,
the first thing to be considered is the kinds of patterns that one wants the random
object to avoid instantiating. Then one considers the kinds of objects that can be
constructed to avoid matching these tests. In this case, take the statistical tests you
don’t want your sequence to fail, and make sure that the sequence is random with
respect to these patterns. Arbitrary segments of ML-random sequences cannot
satisfy this requirement, since they must leave up to chance exactly which entities
come to constitute the random selection.
[10] Consider some statistical test such as the $\chi^2$ test. The probability arising out of the test is the probability that chance alone could account for the divergence between the observed results and the hypothesis; namely, the probability that the divergence between the observed sequence and the probability hypothesis (the infinite sequence) is not an indication that the classification is incorrect. A random sequence is then one that, even given an arbitrarily small probability that chance accounts for the divergence, we would not reject the hypothesis.

2.2 KCS-randomness

One aspect of random sequences we have tangentially touched on is that random sequences are intuitively complex and disordered. Random mating is disorderly at the level of individuals; random rainfall inputs are complex to describe. The
other main historical candidate for an analysis of randomness, suggested by the
work of Kolmogorov, Chaitin and Solomonoff (KCS), begins with the idea that
randomness is the (algorithmic) complexity of a sequence.[11]
The complexity of a sequence is defined in terms of effective production of that
sequence.
Definition 2 (Complexity). The complexity $K_T(S)$ of a sequence $S$ is the length of the shortest programme $C$ of some Turing machine $T$ which produces $S$ as output, when given as input the length of $S$. ($K_T(S)$ is set to $\infty$ if there does not exist a $C$ that produces $S$.)
This definition is machine-dependent; some Turing machines are able to more
compactly encode some sequences. Kolmogorov showed that there exist universal
Turing machines $U$ such that for any sequence $S$,

(1) $\forall T\ \exists c_T\quad K_U(S) \leq K_T(S) + c_T,$

where the constant $c_T$ doesn't depend on the length of the sequence, and hence
can be made arbitrarily small as the length of the sequence increases. Such ma-
chines are known as asymptotically optimal machines. If we let the complexity
of a sequence be defined relative to such a machine, we get a relatively machine-
independent characterisation of complexity.[12]
The upper bound on complexity of
a sequence of length l is approximately l—we can always resort to hard-coding
the sequence plus an instruction to print it.
Definition 3 (KCS-Randomness). A sequence $S$ is KCS-random if its complexity is approximately its length: $K(S) \approx l(S)$.[13]
One natural way to apply this definition to physical processes is to regard the
sequence to be generated as a string of successive outcomes of some such process.
[11] A comprehensive survey of complexity and the complexity-based approach to randomness is Li and Vitanyi (1997). See also Kolmogorov and Uspensky (1988), Kolmogorov (1963), Batterman and White (1996), Chaitin (1975), van Lambalgen (1995), Earman (1986, ch. VIII), Smith (1998, ch. 9), Suppes (1984, pp. 25–33).
[12] Though problems remain. The mere fact that we can give results about the robustness of complexity results (namely, that lots of universal machines will give roughly the same complexity value to any given sequence) doesn't really get around the problem that any particular machine may well be biased with respect to some particular sequence (Hellman, 1978, Smith, 1998).
[13] A related approach is the so-called 'time-complexity' view of randomness, where what matters is not the space occupied by the programme, but rather the time it takes to compute its output given its input. Sequences are time-random just in case the time taken to compute the algorithm and output the sequence is greater than polynomial in the size of the input. Equivalently, a sequence is time-random just when all polynomial time algorithms fail to distinguish the putative random string from a real random string (equivalent because a natural way of distinguishing random from pseudo-random is by computing the function) (Dembski, 1991, 84).
In dynamical systems, this would naturally be generated by examining trajectories
in the system: sequences that list the successive cells (of some partition of the state
space) that are traversed by a system over time. KCS-randomness is thus primarily
a property of trajectories. This notion turns out to be able to be connected with a
number of other mathematical concepts that measure some aspects of randomness
in the context of dynamical systems.[14]
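Since $K$ itself is uncomputable (as discussed below), any practical illustration must use a stand-in. The rough sketch below uses an off-the-shelf compressor, whose output length upper-bounds the length of one kind of description, merely to contrast a patterned string with a disorderly one; it is not a test for KCS-randomness.

```python
import random
import zlib

rng = random.Random(0)
patterned = bytes([0, 1] * 500)                            # length 1000, orderly
disorderly = bytes(rng.randrange(256) for _ in range(1000))

# A short compressed encoding witnesses a short description; an orderly
# string compresses far below its length, a disorderly one hardly at all.
for name, s in [("patterned", patterned), ("disorderly", disorderly)]:
    print(name, len(s), "->", len(zlib.compress(s, 9)), "bytes compressed")
```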
This definition fares markedly better with respect to some of our demands than
vM-randomness. Firstly, there are finite sequences that are classified as KCS-
random. For each $l$, there are $2^l$ binary sequences of length $l$. But the non-KCS-random sequences amongst them are all generated by programmes of less than length $l - k$, for some $k$; hence there will be at most $2^{l-k}$ programmes which generate non-KCS-random sequences. But the fraction $2^{l-k}/2^l = 1/2^k$; so the
proportion of non-KCS-random sequences within all sequences of length l (for
all l) decreases exponentially with the degree of compressibility demanded. Even
for very modest compression in large sequences (say, k = 20, l = 1000) less than
1 in a million sequences will be non-KCS-random. It should, I think, trouble us
that, by the same reasoning, longer sequences are more KCS-random. This means
that single element sequences are not KCS-random, and so the single events they
represent are not KCS-random either.[15]
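The arithmetic of the preceding counting argument is easily checked; this small sketch simply evaluates the bound $2^{l-k}/2^l = 1/2^k$ for a few illustrative values of $k$.

```python
def compressible_fraction_bound(k):
    """At most 2^(l-k) of the 2^l length-l strings compress by k or more bits."""
    return 1 / 2 ** k

for k in (1, 10, 20):
    print(f"k = {k:2d}: at most {compressible_fraction_bound(k):.2e} of sequences")
```

For $k = 20$ the bound is about $9.5 \times 10^{-7}$, matching the 'less than 1 in a million' figure in the text.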
It should also disturb us that biased sequences are less KCS-random than un-
biased sequences (Earman, 1986, 143–5). A sequence of tosses of a biased coin
(e.g. Pr(H) > 0.5) can be expected to have more frequent runs of consecutive 1s
than an unbiased sequence; the biased sequence will be more compressible. A
single 1 interrupting a long sequence of 0s is even less KCS-random. But in each
of these cases, intuitively, the distribution of 1s in the sequence can be as random
as desired, to the point of satisfying all the statistical significance tests for their
probability value. This is important because stochastic processes occur with arbi-
trary underlying probability distributions, and randomness needs to apply to all of
them: intuitively, random mating would not be less random were the distribution
over genotypes non-uniform.
What about statistical testing? Here, again, there are no effective computa-
tional tests for KCS-randomness, nor any way of effectively producing a KCS-
random sequence.[16] This prevents KCS-random sequences being effectively useful in random sampling and randomisation. Furthermore, the lack of an effective test renders the hypothesis of KCS-randomness of some sequence relatively immune to confirmation or disconfirmation.

[14] For instance, Brudno's theorem establishes a connection between KCS-randomness and what is known as Kolmogorov-Sinai entropy, which has very recently been given an important role in detecting randomness in chaotic systems. See Frigg (2004, esp. 430).
[15] There are also difficulties in extending the notion to infinite sequences, but I consider these far less worrisome in application (Smith, 1998, 156–7).
[16] There does not exist an algorithm which on input $k$ yields a KCS-random sequence $S$ as output such that $|S| = k$; nor does there exist an algorithm which on input $S$ yields output 1 iff that sequence is KCS-random (van Lambalgen, 1995, 10–1). This result is a fairly immediate corollary of the unsolvability of the halting problem for Turing machines.
One suggestion is that perhaps we were mistaken in thinking that KCS com-
plexity is an analysis of randomness; perhaps, as Earman (1986) suggests, it ac-
tually is an analysis of disorder in a sequence, irrespective of the provenance of
that sequence. Be that as it may, the problems above seem to disqualify KCS-
randomness from being a good analysis of randomness. (Though random phe-
nomena typically exhibit disorderly behaviour, and this may explain how these
concepts became linked, this connection is neither necessary nor sufficient.)
3 Randomness is Unpredictability: Preliminaries
Perhaps the foregoing survey of mathematical concepts of randomness has con-
vinced you that no rigorously clarified concept can meet every demand on the
concept of randomness that our scientific intuitions place on it. Adopting a best
candidate theory of content (Lewis, 1984), one may be drawn to the conclusion
that no concept perfectly fills the role delineated by our four demands, and one
may then settle on (for example) KCS-randomness as the best partial filler of the
randomness role.
Of course this conclusion only follows if there is no better filler of the role. I
think there is. My hypothesis is that scientific randomness is best analysed as a
certain kind of unpredictability. I think this proposal can satisfy each of the de-
mands that emerge from our quick survey of scientific applications of randomness.
Before I can state my analysis in full, some preliminaries need to be addressed.
3.1 Process and Product Randomness
The mathematical accounts of randomness we addressed do not, on the surface,
make any claims about scientific randomness. Rather, these accounts invite us
to infer, from the randomness of some sequence, that the process underlying that
sequence was random (or that an event produced by that process and part of that
sequence was random). Our demands were all constraints on random processes,
requiring that they might be used to randomise experiments, to account for ran-
dom behaviour, and that they might underlie stochastic processes and be compat-
ible with determinism. Our true concern, therefore, is with process randomness,
not product randomness (Earman, 1986, 137–8). Our survey showed that the in-
ference from product to process randomness failed: the class of processes that
possess vM-random or KCS-random outcome sequences fails to satisfy the intu-
itive constraints on the class of random processes.[17]

[17] There is some psychological research which seems to indicate that humans judge randomness of sequences by trying to assimilate them to representative outcomes of random processes. Any product-first conception of randomness will have difficulty explaining this clearly deep-rooted intuition (Griffiths and Tenenbaum, 2001).
Typically, appeals are made at this point to theorems which show that ‘al-
most all’ random processes produce random outcome sequences, and vice versa
(Frigg, 2004, 431). These appeals are beside the point. Firstly, the theorems
depend on quite specific mathematical details of the models of the systems in
question, and these details do not generalise to all the circumstances in which ran-
domness is found, giving such theorems very limited applicability. Secondly, even
where these theorems can be established, there remains a logical gap between pro-
cess randomness and product randomness: some random processes exhibit highly
ordered outcomes. Such a possibility surely contradicts any claim that product
randomness and process randomness are ‘extensionally equivalent’ (Frigg, 2004,
431).
What is true is that product randomness is a defeasible incentive to inquire into
the physical basis of the outcome sequence, and it provides at least a prima facie
reason to think that a process is random. Indeed, this presumptive inference may
explain much of the intuitive pull exercised by the von Mises and KCS accounts
of randomness. For, insofar as these accounts do capture typical features of the
outputs of random processes, they can appear to give an analysis of randomness.
But this presumptive inference can be defeated; and even the evidential status of
random products is less important than it seems—on my account, far less stringent
tests than von Mises or KCS can be applied that genuinely do pick out the random
processes.
3.2 Randomness is Indeterminism?
The comparative neglect of the concept of randomness by philosophers is in large
part due, I think, to the pervasive belief in the pernicious hypothesis that a physical
process is random just when that process is indeterministic. Hellman, while con-
curring with our conclusion that no mathematical definition of random sequence
can adequately capture physical randomness, claims that ‘physical randomness’
is “roughly interchangeable with ‘indeterministic’” (Hellman, 1978, 83).
Indeterminism here means that the complete and correct scientific theory of
the process is indeterministic. A scientific theory we take to be a class of models
(van Fraassen, 1989, ch. 9). An individual model will be a particular history of
the states that a system traverses (a specification of the properties and changes in
properties of the physical system over time): call such a history a trajectory of the
system. The class of all possible trajectories is the scientific theory. Two types
of constraints govern the trajectories: the dynamical laws (like Newton's laws of motion) and the boundary conditions (as when the Hamiltonian of a classical system restricts a given history to a certain allowable energy surface) govern which states can be accessed from which other states, while the laws of coexistence and boundary conditions determine which properties can be combined to form an allowable state (for instance, the ideal gas law $PV = nRT$ constrains which combinations of pressure and volume can coexist in a state). This model of a scientific theory is
supposed to be very general: the states can be those of the phase space of classical
statistical mechanics, or the states of soil moisture, or of a particular genetic dis-
tribution in a population, while the dynamics can include any mappings between
states.[18]
Definition 4 (Earman-Montague Determinism). A scientific theory is determin-
istic iff any two trajectories in models of that system which overlap at one point
overlap at every point. A theory is indeterministic iff it is not deterministic; equiv-
alently, if two systems can be in the same state at one time and evolve into distinct
states. A system is (in)deterministic iff the theory which completely and correctly
describes it is (in)deterministic. (Earman, 1986, Montague, 1974)
Is it plausible that the catalogue of random phenomena we began with can
be simply unified by the assumption that randomness is indeterminism? It seems
not. Many of the phenomena we enumerated do not seem to depend for their ran-
domness on the fact that the world in which they are instantiated is one where
quantum indeterminism is the correct theory of the microphysical realm. One can
certainly imagine that Newton was right. In Newtonian possible worlds, the kinds
of random phenomena that chaotic dynamics gives rise to are perfectly physically
possible; so too with random mating, which depends on a high-level probabilis-
tic hypothesis about the structure of mating interactions, not low-level indeter-
minism.[19]
Our definition of indeterminism made no mention of the concept of
probability; an adequate understanding of randomness, on the other hand, must
show how randomness and probability are related—hence indeterminism cannot
be randomness. Moreover, we must at least allow for the possibility that quantum
mechanics will turn out to be deterministic, as on the Bohm theory (Bell, 1987b).
Finally, it seems wrong to say that coin tossing is indeterministic, or that creatures
engage in indeterministic mating: it would turn out to be something of a philosophical embarrassment if the only analysis our profession could provide made these claims correct.

[18] Some complications are induced if one attempts to give this kind of account for relativistic theories without a unique time ordering, but these are inessential for our purposes (van Fraassen, 1989).
[19] There are also purported proofs of the compatibility of randomness and determinism (Humphreys, 1978). I don't think that the analysis of randomness utilised in these formal proofs is adequate, so I place little importance on these constructions.
One response on behalf of the pernicious hypothesis is that, while classical physics is deterministic, it is nevertheless, on occasion, a useful idealisation to pretend that a given process is indeterministic, and hence random.[20]
I think that
this response confuses the content of concepts deployed within a theory, like the
concept of randomness, with the external factors that contribute to the adoption
of a theory, such as that theory being adequate for the task at hand, and therefore
being a useful idealisation. Classical statistical mechanics does not say that it is
a useful idealisation that gas motion is random; the theory is an idealisation that
says gas motion is random, simpliciter. Here, I attempt to give a characterisation
of randomness that is uniform across all theories, regardless of whether those
theories are deployed as idealisations or as perfectly accurate descriptions.

[20] John Burgess suggested the possibility of this response to me—and pointed out that some remarks below (particularly §§4.3 and 6.3) might seem to support it.
We must also be careful to explain why the hypothesis that randomness is
indeterminism seems plausible to the extent that it does. I think that the historical
connection of determinism with prediction in the Laplacean vision can explain
the intuitive pull of the idea that randomness is objective indeterminism. I believe
that a historical mistake still governs our thinking in this area, for when increasing
conceptual sophistication enabled us to tease apart the concepts of determinism
and predictability, randomness remained connected to determinism, rather than
with its rightful partner, predictability. It is to the concept of predictability that we
now turn.
4 Predictability
The Laplacean vision is that determinism is idealised predictability:
[A]n intelligence which could comprehend all the forces by which nature is
animated and the respective situation of all the [things which] compose it—
an intelligence sufficiently vast to submit these data to analysis—it would
embrace in the same formula the movements of the greatest bodies in the
universe and those of the lightest atom; for it, nothing would be uncertain
and the future, as well as the past, would be present to its eyes.
(Laplace, 1951, 4)
Definition 5 (Laplacean Determinism). A system is Laplacean deterministic iff it would be possible for an epistemic agent who knew precisely the instantaneous
state and could analyse the dynamics of that system to predict with certainty the
entire precise trajectory of the system.
A Laplacean deterministic system is one where the epistemic features of some ideal
agent cohere perfectly with the ontological features of that world. Given that there
are worlds where prediction and determinism mesh in this way, it is easy to think
that prediction and determinism are closely related concepts.[21]
There are two main ways to make the features of this idealised epistemic agent
more realistic that would undermine this close connection. The first way is to try
and make the epistemic capacities of the agent to ascertain the instantaneous state
more realistic. The second way is to make the computational and analytic capac-
ities of the agent more realistic. Weakening the epistemic abilities of the ideal
agent allows us to clearly see the separation of predictability and determinism.[22]
4.1 Epistemic Constraints on Prediction
The first kind of constraint to note concerns our ability to precisely ascertain the
instantaneous state of a system. At best, we can establish that the system was in a
relatively small region of the state space, over a relatively short interval of time.
There are several reasons for this. Most importantly, we humans are limited in
our epistemic capabilities. Our measurement apparatus is not capable of arbitrary
discrimination between different states, and is typically only able to distinguish
properties that correspond to quite coarse partitions of the state space. In the
case of classical statistical mechanics of an ideal gas in a box, the standard parti-
tion of the state space is into regions that are macroscopically distinguishable by
means of standard mechanical and thermodynamic properties: pressure, temper-
ature, volume. We are simply not capable of distinguishing states that can differ
by arbitrarily little: one slight shift of position in one particle in a mole of gas.
In such cases, with even one macrostate compatible with more than one indis-
tinguishable microstate, predictability for us and determinism do not match; our
epistemic situation is typically worse than this.[23]

[21] An infamous example of this is the bastardised notion of 'epistemological determinism', as used by Popper (1982)—which is no form of determinism at all. The unfortunately named distinction between 'deterministic' and 'statistical' hypotheses, actually a distinction concerning the predictions made by theories, is another example of this persistent confusion (Howson, 2000, 102–3).
[22] For more on this, see Bishop (2003), Earman (1986, ch. 1), Schurz (1995), Stone (1989).
[23] Note that, frequently, specification of the past macroscopic history of a system together with its present macrostate will help to make its present microstate more precise. This is because the past history can indicate something about the bundle of trajectories upon which the system might be. These trajectories may not include every point compatible with the currently observed state. In what follows, we will consider the use of this historical constraint to operate to give a more precise characterisation of the current state, rather than explicitly considering it.
There is an ‘in principle’ restriction too. Measurement involves interactions: a
system must be disturbed, ever so slightly, in order for it to affect the system that
is our measurement device. We are forced to meddle and manipulate the natural
world in ways that render uncertain the precise state of the system. This has two
consequences. Firstly, measurement alters the state of the system, meaning we are
never able to know the precise pre-measurement state (Bishop, 2003, §5). This is
even more pressing if we consider the limitations that quantum mechanics places
on simultaneous measurement of complementary quantities. Secondly, measure-
ment introduces errors into the specification of the state. Repetition does only so
much to counter these errors; physical magnitudes are always accompanied by
their experimental margin of error.
It would be a grave error to think that the in principle limitations are the more
significant restrictions on predictions. On the contrary: prediction is an activity
that arose primarily in the context of agency, where having reasonable expecta-
tions about the future is essential for rational action. Creatures who were not goal
directed would have no use for predictions. As such, an adequate account of pre-
dictability must make reference to the actual abilities of the epistemic agents who
are deploying the theories to make predictions. An account of prediction which
neglected these pragmatic constraints would thereby leave out why the concept of
prediction is important or interesting at all (Schurz, 1995, §6).
A nice example of the consequences of imprecise specification of initial con-
ditions is furnished by the phenomenon from chaotic dynamics known as sensitive
dependence on initial conditions, or ‘error inflation’ (Smith, 1998, pp. 15, 167–8).
Consider some small bundle of initial states $S$, and some state $s_0 \in S$. Then, for some systems,

(2) $\exists \varepsilon > 0\ \forall \delta > 0\ \exists s'_0 \in S\ \exists t > 0\ \big(|s_0 - s'_0| < \delta \wedge |s_t - s'_t| > \varepsilon\big).$

That is, for some bundle of state space points that are within some arbitrary distance $\delta$ in the state space, there are at least two states whose subsequent trajectories diverge by at least $\varepsilon$ after some time $t$. In fact, for typically chaotic sys-
tems, all neighbouring trajectories within the bundle of states diverge exponen-
tially fast. Predictability fails; knowledge of initial macrostates, no matter how
fine grained, can always leave us in a position where the trajectories traversing
the microstates that compose that initial macrostate each end up in a completely
different macrostate, giving us no decisive prediction.
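A standard toy example makes this error inflation vivid: the logistic map at parameter $r = 4$. This map is not one of the systems discussed above, just the simplest familiar case of exponentially diverging trajectories; the initial separation of $10^{-10}$ is an illustrative stand-in for measurement error.

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10    # two states closer than any plausible measurement
for t in range(100):
    if abs(x - y) > 0.5:   # divergence of the order of the whole state space
        print(f"trajectories differ by {abs(x - y):.2f} after {t} steps")
        break
    x, y = logistic(x), logistic(y)
```

With error roughly doubling each iteration, a $10^{-10}$ uncertainty swamps the unit interval within a few dozen steps: each extra step of predictive reach demands exponentially more initial accuracy.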
How well can we accommodate this behaviour? It turns out then that pre-
dictability in such cases is exponentially expensive in initial data; to predict even
one more stage in the time evolution of the system demands an exponential in-
crease in the accuracy of the initial state specification. Given limits on the ac-
curacy of such a specification, our ability to predict will run out in a very short
18
time for lots of systems of very moderate complexity of description, even if we
have the computational abilities. However (and this will be important in the se-
quel) we can predict global statistical behaviour of a bundle of trajectories. This
is typically because our theory yields probabilities of state transitions from one
macrostate into another.[24]
This combination of global structure and local insta-
bility is an important conceptual ingredient in randomness (Smith, 1998, ch. 4).
Bishop (2003) makes the plausible claim that any error in initial measurement will
eventually yield errors in prediction, but exponential error inflation is a particu-
larly spectacular example.

[24] We can also use shadowing theorems (Smith, 1998, 58–60), and knowledge of chaotic parameter values.
4.2 Computational Constraints on Prediction
There may also be constraints imposed by our inability to track the evolution of a
system along its trajectory. Humphreys’ (1978) purported counterexamples to the
thesis that randomness is indeterminism relied on the following possibility: that
the total history of a system may supervene on a single state, hence the system is
deterministic, while no computable sequence of states is isomorphic to that his-
tory. Given the very plausible hypothesis that human predictors have at best the
computation capacities of Turing machines, this means that some state evolutions
are not computable by predictors like us. This is especially pronounced when the
dynamical equations governing the system are not integrable and do not ad-
mit of a closed-form solution (Stone, 1989). Predictions of future states when the
dynamics are based on open-form solutions are subject to ever-increasing com-
plexity as the time scale of the prediction increases.
There is a sense in which all deterministic systems are computable: each sys-
tem does effectively produce its own output sequence. If we are able (per impos-
sibile) to arbitrarily control the initial conditions, then we could use the system
itself as an ‘analogue computer’ that would simulate its own future behaviour.
This, it seems to me, would be prediction by cheating. What we demand of a
prediction is the making of some reasonable, theoretically-informed judgement
about the unknown behaviour of a system—not remembering how it behaved in
the past. (Similarly, predicting by consulting a reliable oracle is not genuine pre-
diction either.) I propose that, for our purposes, we set prediction by cheating
aside as irrelevant.
An important issue for computation of predictions is the internal discrete rep-
resentation of continuous physical magnitudes; this significant problem is com-
pletely bypassed by analogue computation (Earman, 1986, ch. VI). This approach
also neglects more mundane restrictions on computations: our finite lifespan, re-
sources, memory and patience!
4.3 Pragmatic Constraints on Prediction
There are also constraints placed on prediction by the structure of the theory yield-
ing the predictions. Consider thermodynamics. This theory gives perfectly ade-
quate dynamical constraints on macroscopic state conditions. But it does not suf-
fice to predict a state that specifies the precise momentum and position of each
particle; those details are ‘invisible’ to the thermodynamic state. Some features of
the state are thus unpredictable because they are not fixed by the theory’s descrip-
tion of the state.
This is only of importance because, on occasion, this is a desirable feature of
theory construction. A theory of population genetics might simply plug in the pro-
viso that mating happens unpredictably, where this is to be taken as saying that, for
the purposes of the explanatory and predictive tasks at hand, it can be effectively
treated as such. It is more perspicuous not to attempt to explain this higher-order
stochastic phenomenon in terms of lower level theories. This is part of a general
point about the explanatory significance of higher-level theories, but it has par-
ticular force for unpredictability. Some theories don’t repay the effort required
to make predictions using them, even if those theories could, in principle, predict
with certainty. Other theories are simpler and more effective because various de-
terministic phenomena are treated as absolutely unpredictable. A random aspect
of the process is perhaps to be seen as a qualitative factor in explanation of some
quite different phenomenon; or as an ancillary feature not of central importance
to the theory; or it might simply be proposed as a central irreducible explanatory
hypothesis, whose legitimacy derives from the fruitfulness of assuming it. Given
that explanation and prediction are tasks performed by agents with certain cogni-
tive and practical goals in hand (van Fraassen, 1980), the utility of some particular
theory for such tasks will be a matter of pragmatic qualities of the theory.
4.4 Prediction Defined
Given these various constraints, I will now give a general characterisation of the
predictability of a process.
Definition 6 (Prediction). A prediction function $\psi_{P,T}(M, t)$ takes as input the current state $M$ of a system described by a theory $T$ as discerned by a predictor $P$, and an elapsed time parameter, and yields a temporally-indexed probability distribution $\Pr_t$ over the space of possible states of the system. A prediction is a specific use of some prediction function by some predictor on some initial state and elapsed time, who then adopts $\Pr_t$ as their posterior credence function (conditional on the evidence and the theory). (If the elapsed time is negative, the use is a retrodiction.)
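As a toy illustration of Definition 6, the sketch below implements a prediction function for an invented two-macrostate weather system: given a current macrostate and an elapsed number of steps, it returns a probability distribution over macrostates. The states and transition probabilities are hypothetical, standing in for whatever state-transition structure the accepted theory $T$ supplies.

```python
STATES = ("wet", "dry")
TRANSITION = {"wet": {"wet": 0.7, "dry": 0.3},   # invented probabilities
              "dry": {"wet": 0.4, "dry": 0.6}}

def predict(macrostate, steps):
    """A prediction function: current macrostate + elapsed time -> Pr_t."""
    dist = {s: 1.0 if s == macrostate else 0.0 for s in STATES}
    for _ in range(steps):
        dist = {s: sum(dist[r] * TRANSITION[r][s] for r in STATES)
                for s in STATES}
    return dist

# The predictor adopts the output as her credence over states three steps hence.
print(predict("wet", 3))
```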
Let us unpack this a little. Consider a particular system that has been as-
certained to be in some state M at some time. The states are supposed to be
distinguished by the epistemic capacities of the predictors, so that in classical me-
chanics, for example, the states in question will be macrostates, individuated by
differences in observable parameters such as temperature or pressure. A predic-
tion is an attempt to establish what the probability is that the system will be in
some other state after some time t has elapsed.[25] The way such a question is an-
swered, on my view, is by deploying a function of a kind whose most general
form is a prediction function. The agent P who wishes to make the prediction has
some epistemic and computational capabilities; these delimit the fine-grainedness
of the partition of which M is a member, and the class of possible functions. The
theory T gives the basic ingredients for the prediction function, establishing the
physical relations between states of the theory accepted by the agent. These are
contextual features that are fixed by the surroundings in which the prediction is
made: the epistemic and computational limitations of the predictor and the the-
ory being utilised are presuppositions of the making of a prediction (Stalnaker,
1984). These contextual features fix a set of prediction functions that are avail-
able to potential predictors in that context. The actual prediction, however, is the
updating of credences by the predictor who conditions on observed evidence and
accepted theory, which jointly dictate the prediction functions that are available to
the predictor.
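
To fix ideas, here is a minimal sketch in Python of the logical shape of a prediction function. It is an illustration only, not part of the formal proposal; the type names, state labels and probability values are all invented.

    from typing import Callable, Dict

    State = str                        # a coarse-grained state label, e.g. a macrostate
    Distribution = Dict[State, float]  # Pr_t over possible states; values sum to 1

    # psi_{P,T}(M, t): the current state M and an elapsed time t go in; a
    # temporally indexed probability distribution over possible states comes out.
    PredictionFunction = Callable[[State, float], Distribution]

    def deterministic_psi(M: State, t: float) -> Distribution:
        # Degenerate case (see note [25]): all probability on a single state.
        return {M: 1.0}

    def coarse_grained_psi(M: State, t: float) -> Distribution:
        # Typical case: microstates within the coarse state M diverge, so
        # probability is spread over incompatible future states.
        return {"storm": 0.3, "clear": 0.7}

    # A prediction is a specific use of such a function: the predictor adopts
    # its output as their posterior credence over states after elapsed time t.
    posterior = coarse_grained_psi("humid", 2.0)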
The notion of an available prediction function may need some explanation.
Clearly, the agent who updates by simply picking some future event and giving
it credence 1 is updating his beliefs in future outcomes in a way that meets the
definition of a prediction function. Nevertheless, this prediction function is (most
likely) inconsistent with the theory the agent takes to most accurately describe the
situation he is concerned to predict, unless that agent adopts a very idiosyncratic
theory. As such, it is accepted theory and current evidence which are to be taken
as basic; these fix some prediction functions as reasonable for the agents who be-
lieve those theories and have observed that evidence, and it is those reasonable
prediction functions that are available to the agent in the sense I have discussed
here. Availability must be a normative notion; it cannot be, for example, that a
prediction function counts as available merely because an agent could update their credences in accordance with its dictates; it must also be reasonable for the agent to update in that way, given their other beliefs.[26]

[25] A perfect, deterministic prediction is the degenerate case where the probability distribution is concentrated on a single state (or a single cell of a partition).

[26] I thank Adam Elga for discussion of this point.
Graham Priest suggested to me that the set of prediction functions be all recur-
sive functions on the initial data, just so as to make the set of available predictions
the same for all agents. But I don’t think we need react quite so drastically, espe-
cially since to assume the availability of these functions is simply to reject some
of the plausible computational limitations on human predictions.
This conception of prediction has its roots in consideration of classical statis-
tical mechanics, but the use of thermodynamic macrostates as a paradigm for the
input state M may skew the analysis with respect to other theories.[27] The input
state M must include all the information we currently possess concerning the sys-
tem whose behaviour is to be predicted. This might include the past history of the
system, for example when we use trends in the stockmarket as input to our pre-
dictive economic models. It must also include some aspects of the microstate of
the system, as in quantum mechanics, where the uniform initial distribution over
phase space in classical statistical mechanics is unavailable, so all probabilities of
macroscopic outcomes are state dependent. Sometimes we must also include rel-
evant knowledge or assumptions about other potentially interacting systems. This
holds not only in cases where we assume that a system is for all practical purposes
closed or isolated, but also in special relativity, where we can only predict future
events if we impose boundary conditions on regions spacelike separated from us
(and hence outside our epistemic access), for example that those regions are more
or less like our past light cone. So the input state must be broader than just the cur-
rent observations of the system, and it must include all the ingredients necessary,
whatever those might be, to fix on a posterior probability function.

[27] As Hans Halvorson pointed out to me.
The relation of the dynamical equations of the theory to the available predic-
tion functions is an important issue. The aim of a predictive theory is to yield
useful predictions by means of a modified dynamics that is not too false to the
underlying dynamics. For some theories, the precise states will be ascertainable
and the dynamical equations solvable; the prediction functions in this case will
just be the dynamical equations used in the theory, and the probability distribu-
tion over final states will be concentrated on a point in the deterministic case, or
given by the basic probabilistic rule in the indeterministic case (say, Born’s rule
in elementary quantum theory). Other cases are more complicated. In classical
statistical mechanics, we have to consider how the entire family of trajectories
that intersect M (i.e. overlap the microstates s that constitute M) behave under the
dynamical laws, and whether tractable functions that approximate this behaviour
can be found. For instance, the very simple prediction function for ergodic statis-
tical mechanical systems is that the probability of finding a system in some state
M after sufficient time has elapsed is the proportion of the phase space that M
occupies. This requires a great many assumptions and simplifications, ergodicity
prominent among them, and each theory will have different requirements. The
general constraints seem to be those laid down in the preceding subsections, but
no more detailed universal recipe for producing prediction functions can be given.
In any case, the particular form of prediction functions is a matter for physical
theory; the logical properties of such a function are those I have specified above.
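
As an illustration of the ergodic case just described, the following sketch (with invented phase-space proportions) exhibits a prediction function for which, after sufficient time, the input state drops out entirely:

    # Illustrative phase-space proportions for three macrostates (invented).
    volumes = {"equilibrium": 0.970, "slightly_off": 0.029, "far_off": 0.001}

    def ergodic_psi(M: str, t: float) -> dict:
        # For sufficiently large t, the probability of finding the system in a
        # macrostate is just the proportion of phase space it occupies; the
        # input macrostate M makes no difference to the answer.
        total = sum(volumes.values())
        return {state: v / total for state, v in volumes.items()}

    print(ergodic_psi("far_off", t=1e6))   # the same distribution for any M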
Of course, whether any function that meets these formal requirements is a
useful or good prediction function is another matter. A given prediction function
can yield a distribution that gives probability one to the whole state space, but no
information about probabilities over any more fine-grained partition. Such a func-
tion, while perfectly accurate, is pragmatically useless and should be excluded by
contextual factors. In particular, I presume that the predictor wishes to have the
most precise partition of states that is compatible with accurate prediction. But
the tradeoff between accuracy and fine-grainedness will depend on the situation
in hand.
The ultimate goal, of course, is that the probability distribution given by the
prediction function will serve as normative for the credences of the agents mak-
ing the prediction (van Fraassen, 1989, 198). The probabilities are matched with
the credence by means of a probability coordination rule, of which the Principal
Principle is the best known example (Lewis, 1980). This is essential in explaining
how predictions give rise to action, and is one important reason why the outcomes
of a prediction must be probabilistic. Another is that we can easily convert a prob-
ability distribution over states into an expectation value for the random variable
that represents the unknown state of the system. Prediction can then be described
as yielding expectation values for some system given an estimation of the cur-
rent values that characterise the system, which enables a large body of statistical
methodology to come to bear on the use and role of predictions.[28]

[28] For a start, see Jeffrey (2004), especially ch. 4.
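
As a toy illustration of that conversion (the states, probabilities and variable values are invented), the expectation is just a probability-weighted sum:

    # Pr_t over macrostates, and a random variable X encoding the unknown
    # state, here read as a temperature associated with each macrostate.
    pr_t = {"hot": 0.7, "cold": 0.3}
    X = {"hot": 30.0, "cold": 5.0}

    expectation = sum(pr_t[s] * X[s] for s in pr_t)
    print(expectation)   # 0.7 * 30.0 + 0.3 * 5.0 = 22.5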
5 Unpredictability
With a characterisation of predictability in hand, we are in a position to charac-
terise some of the ways that predictability can fail. Importantly, since we have
separated predictability from determinism, it turns out that being indeterministic
is one way, but not the only way, in which a phenomenon can fail to be predictable.
Definition 7 (Unpredictability). An event E (at some temporal distance t) is un-
predictable for a predictor P iff P’s posterior credence in E after conditioning on
current evidence and the best prediction function available to P is not 1—that is,
if the prediction function yields a posterior probability distribution that doesn't assign probability 1 to E.[29]
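
In the style of the earlier sketch, Definition 7 can be rendered as a simple test, with an event modelled as the set of states in which it occurs; again, this is illustrative only:

    from typing import Callable, Dict, Set

    def unpredictable(psi: Callable[[str, float], Dict[str, float]],
                      M: str, t: float, event: Set[str]) -> bool:
        # E is unpredictable for P iff the best available prediction function,
        # given the current state M, assigns E a posterior probability below 1.
        pr_t = psi(M, t)
        posterior = sum(p for state, p in pr_t.items() if state in event)
        return posterior < 1.0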
There is some worry that this definition is too inclusive—after all, there are
many future events that are intuitively predictable and yet we are not certain that
they will occur. This worry can be assuaged by attending to the following two
considerations. Firstly, this definition captures the idea that an event is not per-
fectly predictable. If the available well-confirmed prediction function allows us to
considerably raise our posterior credence in the event, we might well be willing
to credit it with significant predictive powers, even though it does not confer certainty on the event. This only indicates that between perfect predictability, and the
kind of unpredictability we shall call randomness (below, §6), there are greater
or lesser degrees of unpredictability. Often, in everyday circumstances, we are
willing to collapse some of these finer distinctions: we are willing, for example,
to make little distinction between certainty and very high non-unity credences.
(This is at least partially because the structure of rational preference tends to ob-
scure these slight differences which make no practical difference to the courses
of action we adopt to achieve our preferred outcomes.) It is therefore readily un-
derstood that common use of the concept of unpredictability should diverge from
the letter, but I suggest not the spirit, of the definition given above. Secondly, we
must recognise that when we are prepared to use a theory to predict some event,
and yet reserve our assent from full certainty in the predictions made, what we
express by that is some degree of uncertainty regarding the theory. Our belief in
and acceptance of theories is a complicated business, and we frequently make use
of and accept theories that we do not believe to be true. Some of what I have
to say here about pragmatic factors involved in prediction reflects the complexi-
ties of this matter. But regardless of our final opinion on acceptance and use of
theories, it remains true that our conditional credences concerning events, condi-
tional on the truth of those theories, capture the important credential states as far
as predictability is concerned. So, many events are predictable according to the
definition above, because conditional credence in the events is 1, conditional on
the simple theories we use to predict them. But we nevertheless refrain from full
certainty because we are not certain of the simple theory. The point is that predic-
tion as I’ve defined it concerns what our credences would be if we discharged the
condition on those credences, by coming to believe the theory with certainty; and
this obviously simplifies the actual nature of our epistemic relationship with the
theories we accept.

[29] Note, in passing, that this definition does not make biased sequences any more predictable than unbiased ones, just because some outcome turns up more often. Unpredictability has to do with our expectations; and in cases of a biased coin we do expect more heads than tails, for example. We still can't tell what the next toss will be to any greater precision than the bias we might have deduced; hence it remains unpredictable.
An illustration of the definition in action is afforded by the case of indetermin-
ism, the strongest form of unpredictability. If the correct theory of some system is
indeterministic, then we can imagine an epistemic agent of perfect computational
and discriminatory abilities, for whom the contextually salient partition on state
space individuates single states, and who believes the correct theory. An event is
unpredictable for such an agent just in case knowledge of the present state does
not concentrate posterior credence only upon states containing the event. If the
theory is genuinely indeterministic there exist lawful future evolutions of the system from the current state to each of two incompatible future states S and S′. If there is any event true in S but not in S′, that event will be unpredictable. Indeed, if an indeterministic theory countenances any events that are not instantiated everywhere in the state space, then those events will be unpredictable.
It is important to note that predictability, while relative to a predictor, is a
theoretical property of an event. It is the available prediction functions for some
given theory that determine the predictions that can be made from the perspec-
tive of that theory. It is the epistemic and computational features of predictors
that fix the appropriate theories for them to accept—namely, predictors accept
theories which partition the state space at the right level of resolution to fit their
epistemic capacities, and provide prediction functions which are well-matched to
their computational abilities. In other words, the level of resolution and the al-
lowed computational expenditure are parameters of predictability, and there will
be different characteristic or typical parameters for creatures of different kinds,
in different epistemic communities. This situation provides another perspective
on the continued appeal of the thesis that randomness is indeterminism. Theories
which describe unpredictable phenomena, on this account, treat those phenomena
as indeterministic. The way that the theory represents some situation s is the same as the way it represents some distinct situation s′, but the ways the theory represents the future evolutions of those states, t(s) and t(s′), are distinct, so that within the theory we have duplicate situations evolving to distinct situations.
It is easy to see how the features that separate prediction from determinism
also lead to failures of predictability. The limited capacities of epistemic agents to
detect differences between fundamental detailed states, and hence their limitation
to relatively coarse-grained partitions over the state space, lead to the possibility
of diverging trajectories from a single observed coarse state even in determinis-
tic systems. Then there will exist events that do not get probability one and are
hence unpredictable. Note that one and the same type of event can be predictable at one temporal distance and unpredictable at another, if the diverging trajectories require some extended interval of time to diverge from each other.
If the agent does not possess the computational capacities to utilise the most
accurate prediction functions, they may be forced to rely on simplified or approx-
imate methods. If these techniques do lead to predictions of particular events
with certainty, then either (contra the assumption) the prediction function is not
a simplification or approximation at all, or the predictions will be incorrect, and
the prediction functions should be rejected. To avoid rejecting prediction func-
tions that make incorrect but close predictions, those functions should be made
compatible with the observed outcomes by explicitly considering the margins of
error on the approximate predictions. Then the outputs of such functions can in-
clude the actual outcome, as well as various small deviations from actuality—they
avoid conclusive falsification by predicting approximately which state will result.
If such approximate predictions can include at least a pair of mutually exclusive
events, then we have unpredictability with respect to those events.
Finally, if the agent accepts a theory for pragmatic reasons, then that may in-
duce a certain kind of failure of predictability, because the agent has restricted
the range of available prediction functions to those that are provided by the the-
ory subject to the agent’s epistemic and computational limitations. An agent who
uses thermodynamics as his predictive theory in a world where classical statisti-
cal mechanics is the correct story of the microphysics thereby limits her ability
to predict outcomes with perfect accuracy (since there are thermodynamically in-
distinguishable states that can evolve into thermodynamically distinguishable out-
comes, if those initial states are statistical-mechanically distinguishable). Theo-
ries also make certain partitions of the real state space salient to predictors (the so-
called level of description that the theory operates at), and this can lead to failures
of predictability in much the same way as epistemic restrictions can (even though
the agents might have other, pragmatic, reasons for adopting those partitions as
salient—for instance, the explanatory value of robust macroscopic accounts).
6 Randomness is Unpredictability
We are now in a position to discuss my proposed analysis. The views suggested
by Suppes and Kyburg in the epigraphs to this paper provide some support for
this proposal—philosophical intuition obviously acknowledges some epistemic
constraints on legitimate judgements of randomness. I think that these epistemic
features, derived from pragmatic and objective constraints on human knowledge,
exhaust the concept of randomness.
As I discussed earlier, some events which satisfy my definition of unpre-
dictability are only mildly unpredictable. For instance, if the events are distin-
guished in a fine-grained way, and the prediction concentrates the posterior prob-
ability over only two of those events, then we may have a very precise and accu-
rate prediction, even if not perfect. These failures of prediction do not, intuitively,
produce randomness. So what kind of unpredictability do I think randomness is?
The following definition captures my proposal: randomness is maximal un-
predictability.
Definition 8 (Randomness). An event E is random for a predictor P using theory
T iff E is maximally unpredictable. An event E is maximally unpredictable for P
and T iff the posterior probability of E, yielded by the prediction functions that T
makes available, conditional on current evidence, is equal to the prior probability
of E. This also means that P’s posterior credence in E, conditional on theory
and current evidence (the current state of the system), must be equal to P’s prior
credence in E conditional only on theory.
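
Continuing the illustrative sketches above, the definition amounts to a comparison of posterior with prior:

    def random_event(psi, M, t, event, prior):
        # Definition 8: E is random iff it is maximally unpredictable, i.e. the
        # posterior probability of E, given the available prediction function
        # and the current state M, equals E's prior probability.
        pr_t = psi(M, t)
        posterior = sum(p for state, p in pr_t.items() if state in event)
        return abs(posterior - prior) < 1e-9   # equality, up to rounding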
We may call a process random, by extension, if each of the outcomes of the process is random. So rainfall inputs constitute a random process because the timing and magnitude of each rainfall event is random.[30] That is, since the outcomes of a process {E_1, . . . , E_n} partition the event space, the posterior probability distribution (conditional on theory and evidence) is identical to the prior probability distribution.[31]
This definition and its extension immediately yields another, very illuminat-
ing, way to characterise randomness: a random event is probabilistically inde-
pendent of the current and past states of the system, given the probabilities sup-
ported by the theory (when those current and past states are in line with the coarse-
graining of the event space appropriate for the epistemic and pragmatic features
of the predictor). The characteristic random events, on this construal, are the suc-
cessive tosses of a coin: independent trials, identically distributed because the
theory which governs each trial is the same, and the current state is irrelevant
to the next or subsequent trials—a so-called Bernoulli process. But the idea of
randomness as probabilistic independence is of far wider application than just to
these types of cases, since any useful prediction method aims to uncover a signifi-
cant correlation between future outcomes and present evidence, which would give
probabilistic dependence between outcomes and input states. This connection be-
tween unpredictability and probabilistic independence is in large part what allows
our analysis to give a satisfactory account of the statistical properties of random
phenomena. I regard it as a significant argument in favour of my account that it
can explain this close connection.

[30] To connect up with our previous discussions, a sequence of outcomes is random just in case those outcomes are the outcomes of a random process. This is perfectly compatible with those outcomes being a very regular sequence; it is merely unlikely to be such a sequence.

[31] At this point, it is worth addressing a putative counterexample raised by Andy Egan. A process with only one possible outcome is random on my account: there is only one event (one cell in the partition), which gets probability one, which is the same as its unconditional probability. It also counts as predictable, because all of the probability measure is concentrated on the one possible state. I am perfectly happy with accepting this as an obviously degenerate and unimportant case; recall the discussion of the trivial prediction function above (§4.4). If a fix is nevertheless thought to be necessary, I would opt simply to require two possible outcomes for random processes; this doesn't seem ad hoc, and is explicitly included in the definition of unpredictability.
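
The Bernoulli case can be illustrated numerically. In the following sketch of a simulated fair coin (seed and sample size invented), the frequency of heads immediately after a head matches the unconditional frequency of heads, as probabilistic independence requires:

    import random

    random.seed(0)
    flips = [random.random() < 0.5 for _ in range(100_000)]

    p_heads = sum(flips) / len(flips)
    after_heads = [nxt for cur, nxt in zip(flips, flips[1:]) if cur]
    p_heads_after_heads = sum(after_heads) / len(after_heads)

    # For a Bernoulli process the current state is irrelevant to the next
    # trial: the two frequencies agree up to sampling error, so conditioning
    # on the present outcome buys no predictive advantage.
    print(round(p_heads, 3), round(p_heads_after_heads, 3))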
However, there are a number of processes for which a strict probabilistic inde-
pendence assumption fails. For example, though over long time scales the weather
is quite unpredictable, from day to day the weather is more stable: a fine day is
more likely to be followed by another fine day. Weather is not best modelled by
a Bernoulli process, but rather by a Markov process, that is, one where the proba-
bility of an outcome on a trial is explicitly dependent on the current state. Indeed,
probably most natural processes are not composed of a sequence of independent
events. Independence of events in a system is likely only to show itself over
timescales where sensitive dependence on initial conditions and simplified dy-
namics have time to compound errors to the point where nothing whatsoever can
be reliably inferred from the present case to some quite distant future event.[32] The
use of ‘random’ to describe those processes which may display some short term
predictability is quite in order, once we recognise the further contextual parameter
of the temporal distance between input state and event (or random variable) to be
predicted, and that for quite reasonable timescales these processes can become un-
predictable. (This also helps us decide not to classify as random those processes
which are unpredictable in the limit as t grows arbitrarily, but which are remark-
ably regular and predictable at the timescales of human experimenters.) That the
commonsense notion of randomness includes such partially unpredictable pro-
cesses is a prima facie reason to take unpredictability, not independence, to be the
fundamental notion—though nothing should obscure the fact that probabilistic in-
dependence is the most significant aspect of unpredictability for our purposes.[33]
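
A toy two-state Markov model of the weather (transition probabilities invented; numpy assumed available) makes the point about timescales vivid: tomorrow is partially predictable from today, but the n-step distribution converges on the stationary distribution, after which the input state contributes nothing:

    import numpy as np

    # Rows: today's state (fine, rain); columns: tomorrow's state.
    P = np.array([[0.8, 0.2],    # a fine day tends to be followed by a fine day
                  [0.4, 0.6]])

    today = np.array([1.0, 0.0])  # today is certainly fine
    for n in (1, 5, 30):
        print(n, today @ np.linalg.matrix_power(P, n))

    # After one day the forecast is (0.8, 0.2): partially predictable.
    # After thirty days it is approximately (2/3, 1/3), the stationary
    # distribution, whatever today's state: unpredictability in the sense
    # of Definition 8 at that temporal distance.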
It is a central presupposition of my view that we can make robust statistical
predictions concerning any process, random or not.[34] One of the hallmarks of ran-
dom processes is that these are the best reliable predictions we can make, since the
expectations of the variables whose values describe the characteristics of the event
are well defined even while the details of the particular outcomes are obscure prior
to their occurrence. This is crucial for the many scientific applications of random-
ness: random selections are unpredictable with respect to the exact composition of
a sample (the event), but the overall distribution of properties over the individuals
in that sample is supposed to be representative of the frequencies in the population
as a whole. In random mating, the details of each mating pair are not predictable,
but the overall rates of mating between parents of like genotype are governed by the frequency of that genotype in the population.[35]

[32] Compare the hierarchy of ergodic properties in statistical mechanics, where the increasing strength of the ergodic, mixing, and Bernoulli conditions serves to shorten the intervals after which each type of system yields random future events given past events (Sklar, 1993, 235–40).

[33] Further evidence for this claim is provided by the fact that probabilistic independence is an all-or-nothing matter; and taking this as the definition of randomness would have the unfortunate effect of misclassifying partially unpredictable processes as not random.

[34] Is there ever randomness without probabilistic order? Perhaps in Earman's space invader case, it is implausible to think that any prior probability for the space invasion is reasonable—not even a zero prior. The event should be completely unexpected, and should not even be included in models of the theory. This would correspond to the event in question not even being part of the partition that the prediction function yields a distribution over. This, as it stands, would be a counterexample to my analysis, since that analysis requires a probability distribution over outcomes, and if there is no distribution, the event is trivially not random. I think we can amend the definition so as to capture this case: add a clause to the definition of predictability requiring there to be some prediction function which takes the event into consideration.
I wish to emphasise again the role of theories. An event is random, just if
it is unpredictable, that is, if the best theoretical representation of that event rel-
ative to a given predictor leaves the probability of that event unchanged when
conditioned on the current state and the laws of the theory. We should give a natu-
ralised account of the best theory relative to a predictor: that theory should be the
one that maximises fit between the epistemic and computational capacities of the
predictors and the demands on those capacities made by the theory, where those
capacities are perfectly objective features of the predictors. An event is random,
then, just in case these objective features of the agents in question render that event
unpredictable.[36] This means, therefore, that while ascriptions of randomness are
sensitive to the requirements of the agents who are using the concept and making
the ascriptions, they are nevertheless objectively determined, by the theories it is
(objectively) appropriate for those agents to utilise. Randomness is thus an extrin-
sic property of events, dependent on properties of agents and the theories they use.
This observation will become important below (§6.3), when discussing whether
randomness as I have defined it is subjective.
[35] This illuminates the common ground my proposal shares with Martin-Löf's statistical testing view of randomness. If we take the patterns to be provided by some potentially predictive theory, then failing statistical tests is equivalent to being unpredictable with respect to that theory. For the theory provides no resources to reject the hypothesis that the only structure governing the sequence is pure chance. But a potentially predictive theory will not have infinitely many concurrent predictions for a single predictor or group of predictors; so no theory can provide the resources for full Martin-Löf randomness and still remain predictive, except to creatures with computational abilities quite unlike our own. Nevertheless the spirit of the statistical test proposal remains, though relativised to a set of statistical tests that can actually be applied to yield substantive information about the genesis and behaviour of a random process.

[36] Of course, if agents know their epistemic limitations, they may know of deterministic theories which can correctly account for the phenomena, but the use of which lies outside their capabilities. That is just one additional reason why randomness can correctly be assigned even in cases of perfect determinism.
6.1 Clarification of the Definition of Randomness
The definition of randomness might be further clarified by close examination of
a particularly nice example that Tim Williamson proposed to me. Williamson’s
example was as follows: let us suppose that I regularly play chess against an
opponent who is far superior to me. Not only does he beat me consistently; he
beats me without my being aware at the time of his strategy and without my being
able to anticipate any but the most obvious of his moves. I cannot predict what
his moves will be. Prima facie, it may appear that my proposal is committed to
classifying his moves as random; if true, that would pose a serious problem for
the view.
Thankfully, there exist at least three lines of response to this example, each of
which illuminates the thesis that randomness is unpredictability. Firstly, note that
unpredictability is theory relative. It is not only the statistical aspects (i.e. actual
frequencies of outcomes) of a phenomenon which dictate how it will be repre-
sented by theory; if I am convinced that my opponent is an agent who reasons and
plans, no theory I will accept will have the consequence that his chess-playing
behaviour is entirely random. Indeed, we will never regard these apparently prob-
abilistic outcomes as indicative of genuine probabilistic independence (since gen-
uine probabilities have a modal aspect not exhausted by the actual statistics). What
we have in this case is not sufficient for randomness because we will never accept
that the goal-directed activities of a rational agent are genuinely unpredictable,
nor are those behaviours really probabilistically independent of preceding states:
I certainly regard my opponent as being in a position to predict his own behaviour,
and to predict it on the basis of the current state of play. Of course, in this situa-
tion, the theories which are directly available to me are not sufficient to enable me
to predict that behaviour.
This leads to consideration of a second point. It is essential to note that judge-
ments of predictability will typically be made by an epistemic or scientific com-
munity, and not a particular individual. It is communities which accept scientific
theories, and the capabilities and expertise of each member of the community
contributes to its predictive powers. This is because the set of available predic-
tion functions in a given theory does not reflect merely personal idiosyncrasies
in understanding the theory, but instead reflects the intersubjective consensus on
the capabilities of that theory. Since the relevant bearer of a predictive ability is
an epistemic community, a phenomenon is judged random with respect to a com-
munity of predictors, not an individual. My chess-playing opponent and I are presumably members of the same scientific community, and the theories we jointly accept make his chess playing predictable—he knows the theory, while I accept his authority with respect to knowing it and judge his play predictable, even if not by me. This serves to reinforce the point that the 'availability' to me of a
theory, or of a prediction function, is not a matter of what’s in my head, but rather
of what theories count as normative for my judgements, given the kind of person
I am and the kind of community I inhabit. One could, of course, define a concept
of ‘personal unpredictability’, to capture those uses of the term ‘unpredictable’
that reflect the ignorance and incapacity of a particular individual. But—and this
merely underscores the importance of the communitarian concept—such a per-
sonal unpredictability would have little or no claim to capture the central uses
of the term ‘unpredictable’, nor any further useful application in the analysis of
randomness or other concepts.
A third response also undermines the claim that this chess player’s moves are
unpredictable. For this is exactly the kind of situation where one might be fre-
quently surprised by the moves that are made, but one can in retrospect assimilate
them to an account of the opponent’s strategy. That is, while playing I operated
with a theory which was not sufficient to make accurate predictions concerning
my opponent’s behaviour; in retrospect, and upon due consideration of his play,
I can come to develop my understanding of that play, and hence develop better
accounts of the nature of his chess playing. I can then realise that his behaviour
was not random, though it may have appeared random at that time. Moreover, it
may have been (internally) epistemically acceptable for me at the time to judge
his behaviour as random (setting aside for the time being the preceding two re-
sponses), though in retrospect I can see that I had no robust external warrant for
that judgement.
This last response illustrates a point that may not have been clear from the
foregoing discussion: no mention was made, in the definition or its glosses, of
any temporal conditions on the appropriateness of predictive theories for agents.
That is, randomness is relative to the best theory of some phenomenon, where
which theory counts as best is partially dictated by the cognitive and pragmatic
parameters appropriate for some community of agents. It does not, therefore,
depend on whether those agents are actually in possession of that theory. Obvi-
ously it would be inappropriate to criticise past predictors on the grounds that they
made internally warranted judgements of randomness that were false by the lights
of theories we currently possess. On the other hand, it is true that they would de-
serve censure had it been the case that they were in possession of the best theory
of some phenomenon, and had made judgements of predictability which were at
variance with that theory. That is the sense in which theory-relative judgements of
predictability are supposed to be normative for agents of the kind in question. As
such, it is clear that contingencies of ignorance shouldn’t lead us to count some-
thing as random; it is a kind of (pragmatically/cognitively/theoretically) necessary
lack of predictive power which makes an event random. To turn back to the post
facto analysis of my opponent’s play: while playing I made a (perhaps) warranted
judgement that it was random. But that judgement was at best preliminary and
defeasible, for it is clear that it would be in principle possible for me to come
to possess (or to defer to an expert’s possession of) a good predictive theory of
that play, and hence to recognise the sense behind what appeared wrongly to be
random play. By contrast, events that are genuinely random do not contribute in
this way to further illumination of the process of which they are outcomes: no after
the fact analysis of a random event will make greater predictive power available
to me or my epistemic brethren. In one sense this is a simple corollary of the
fact that the Bernoulli process is the paradigm random process, and outcomes in
such a process are probabilistically, and hence predictively, independent. But in
another it provides an important illustration of the application of the definition of
randomness—judgements of randomness can be incorrect though warranted, and
outcomes of such a process may well serve as evidence undermining the warrant
for the judgement.[37]
6.2 Randomness and Probability
One may be wondering what kind of interpretation of probability goes into the
definition.[38] Obviously, credences play a central role in attributions of random-
ness, as it is only by way of updating credences that theories yield actual predic-
tions. As such, as long as an agent has credences that could be rationally updated
in accordance with the best theory for the community of which that agent is a
member—that is, whose credence function is suitably deferential to expert cre-
dence functions (van Fraassen, 1989, §§8.4–8.5)—we have the minimum neces-
sary ingredients for potentially correct judgements of randomness. Actual judge-
ments of randomness approach correctness as the actual updating of credences
more closely approximates the updating that would be licensed by possession of
the best theory. However, there is a further question concerning whether there
are other kinds of objective probabilities (‘chances’) which are disclosed by the
theories in question and count as normative for the credences of the predictor, via
something like the Principal Principle (Lewis, 1980). I hope that the account is
neutral on this important issue, and I hope that, no matter which account (if any)
turns out to be correct, it can simply be slotted into this interpretation of random-
ness.

[37] Williamson's example does point to a difficulty, however. Consider the hypothesis that our world is run by an omnipotent and completely rational deity, whose motives and reasons are quite beyond our ken, and hence our world appears quite capricious and arbitrary. If we accept such a theory, we must accept both (i) that the events in the world have reason and purpose behind them, being the outcomes of a process of rational deliberation by a reasonable agent; and (ii) that the best theory of such events that we might ever possess will classify them as random. This seems to me a genuine problem (though there is some question about its significance). One way around it might be simply to add a condition to the definition that, if the event in question is the outcome of some process of rational deliberation, it cannot be random, no matter how unpredictable it is. But this proposal seems to avoid the problem only by stipulation. I prefer, therefore, to suggest that any event which can be rationalised (as the act of a recognisably rational agent) will be predictable; and that therefore, if this deity's actions are genuinely unpredictable, they are not rationalisable, and, I propose, cannot be seen as purposive in the way required for the example to have any force.

[38] Dorothy Edgington urged me to address this concern.
In fact, the only requirement that my account of randomness makes on an
interpretation of probability is that an account must be given of the content of
probabilistic models in scientific theories. That is, the interpretation must explain
what feature of objective probability allows it to influence credence, and to shape
expectations concerning the way the world will turn out, given that all the agent
does is accept some theory which features probabilistic models.[39] Most naturally
it might be thought that an objective account of probability could meet this de-
mand, but subjectivist accounts must also be able to do so, although perhaps less
easily. Perhaps the only account that the view is not compatible with is von Mises’
original frequency view: since he includes randomness as part of the definition of
probability, on pain of circularity he cannot use this definition of randomness,
which already mentions probability.
Von Mises’ discussion of randomness was motivated by his desire to find firm
grounds for the applicability of probability to empirical phenomena. I completely
agree: random phenomena are frequently characterised by the fact that they can
typically be given robust probabilistic explanations, particularly in terms of the
probabilistic independence of certain events and certain initial data. But even if
the grounds we have for applying probabilistic theories lie in our own cognitive
incapacities, that does not hold for the probabilities postulated by those theories.
Just because predictability is partially epistemic, and hence randomness is par-
tially epistemic, doesn’t mean that the probability governing the distribution of
predictions is epistemic. So our cognitive capacities and pragmatic demands lead
to the suitability of treating phenomena as random, that is, modelling them prob-
abilistically. Our epistemic account of randomness therefore provides a robust
and novel explanation of the applicability of probabilistic theories even in deter-
ministic cases, without having to mount the difficult argument that there are ob-
jective chances in deterministic worlds, and without sacrificing the objectivity of
genuine probability assignments by adopting a wholesale subjectivist approach to
probability. Randomness then has important metaphysical consequences for the
understanding of chance, as well as being internally important to the project of
understanding scientific theories that use the concept. Our epistemic stance man-
dates the use of probabilistic theories; the connection between the probabilities in
those theories, and the credences implicit in our epistemic states, is by no means direct and straightforward.

[39] Such a demand is tantamount to requiring that the interpretation of probability be able to answer what van Fraassen (1989, 81) calls the 'fundamental question about chance', which I take to be an uncontroversial but difficult standard to meet.
6.3 Subjectivity and Context-Sensitivity of Randomness
I have emphasised repeatedly that predictability is in part dependent on the prop-
erties of predictors. What one epistemic agent can predict, another with differ-
ent capacities and different theories may not be able to predict. Laplacean gods
presumably have more powerful predictive abilities than we do; perhaps for such
gods, nothing is random. Or consider a fungus, with quite limited representational
capacities and hence limited predictive abilities. Almost everything is random for
the fungus; it evolved merely to respond to external stimuli, rather than to predict
and anticipate. It may appear, then, that judgements of predictability, and hence
of randomness, must be to some extent subjective and context-sensitive. This subjectivity may seem counterintuitive. It may also seem quite worrying that a subjective concept should play a role in successful scientific theorising. I wish now to defuse these worries.
Firstly, it is a consequence of our remarks in §6.1 that two epistemic agents
cannot reasonably differ merely over whether some process is unpredictable or
random. If they rationally disagree over the predictability of some phenomenon,
they must be members of different epistemic communities, in virtue of adopting
different theories or having different epistemic capacities or pragmatic goals. It
should be quite unexceptional that agents who differ in their broader theoretical
or practical orientation may differ also in their judgements of the predictability of
some particular process.
Secondly, there will be reasonably straightforward empirical tests of the pre-
dictive powers of that predictor who claims the process is not random. This dis-
agreement will then be resolved if one takes these empirical results to indicate
which theory more correctly describes the world, and which therefore deserves to
be adopted as the best predictive theory.
Given these qualifications, it might seem misleading to label the present ac-
count 'subjective'.[40] For, given values for the parameters of precision of obser-
vations and required accuracy of computations, and given a background theory,
whether a process is predictable or not follows immediately. When we recognise
that these parameter values do not vary freely and without constraint from agent
to agent, but are subject to norms fixed by the communities of which agents are
a part, it seems that rational agents can’t easily disagree over randomness, and
that purely personal and subjective features of those agents do not play a sig-
nificant role in judgements of randomness. It does not seem quite right to call
predictability ‘subjective’ simply because agents with opposed epistemic abili-
ties and commitments may reasonably disagree over its applicability. And insofar
as we remain content to classify predictability as subjective, these observations
make it clear that it is a quite unusual form of subjectivity, for randomness and
predictability are clearly not applied on a purely personal basis, arbitrarily and
without rational constraint, and as such are capable of having further scientific
significance.

[40] John Burgess made this point.
But in this sense few concepts are truly subjective. Like other folk monadic
properties, randomness can be analysed as a relation with the other terms fixed
contextually. Consider so-called ‘subjective probability’, which can be analysed
either as a subjective property of events, or as grounded in an objective relation
between an event and an agent with certain objective behavioural dispositions.
In the case of predictability and randomness, it is features and capacities of the
predictor that fill in the ‘missing’ places of the relation. Given that, it is a matter
of choice whether we decide to analyse the term as a predicate with subjective
aspects, or as a relation. I would be inclined to claim that randomness is a partially
(inter-)subjective property, simply to make clear where my proposal might fall in
a taxonomy of analyses of randomness, but nothing of significance really turns
on this. In a similar fashion we typically use the standard subjective analysis of
probability, in order to make the semantics go more smoothly and to make the
connection with past uses of the term clearer.
The context sensitivity of randomness is more intriguing. Here the possibility
arises that, in a given context, the relevant theoretical possibilities are delimited
by a theory that in other contexts would be repudiated as inadequate. There are, I
have argued (§4.3), contexts where thermodynamic reasoning is appropriate, even
though that theory is false. In such contexts, therefore, a judgement of randomness
may be appropriate, even though the phenomenon in question might be predictable
using another theory. Perhaps when stated so baldly the context-sensitivity of ran-
domness might seem implausible. However, randomness and predictability are
only context sensitive in virtue of the fact that theory-acceptance is very plausi-
bly context sensitive. As such, no special problem of context sensitivity arises
for randomness that is not shared with other theory-dependent concepts. Fur-
thermore, the natural alternative would be an invariantist account of randomness.
Such an account would not be adequate, primarily because one would have to give
a theory-independent account of randomness, and this would be manifestly inade-
quate to explain how the concept is used in diverse branches of scientific practice
(§1).
For instance, in statistical testing we frequently wish to design random se-
quences so that they pass a selected set of statistical tests. In effect, we wish to use
an effectively predictable phenomenon to produce sequences which mimic natural
unpredictability by being selective about which resources (which statistical tests)
we shall allow the predicting agents to use. Dembski (1991) sees a fundamental
split here between a deterministic pseudo-randomness and genuine randomness.
I reject this split: accepting it would involve a significant distortion of the con-
ceptual continuity between randomness in deterministic theories and randomness
in indeterministic theories. We certainly wish to explicitly characterise some or-
dered and regular outcome sequences as those a genuinely random process should
avoid.
But in selectively excluding certain non-random sequences we do not thereby
adopt some new notion of ‘pseudo-randomness’ that applies to the sequences that
remain. Those remaining sequences play precisely the theoretical role that random
sequences play; in particular, they license exactly the same statistical inferences.
Better, then, to recognise that the appropriate theory for that phenomenon, in that
theoretical context, classifies those phenomena as genuinely random. Random-
ness by design, then, is randomness that arises from our adoption of empirically
successful theories; which is to say, randomness simpliciter.
7 Evaluating the Analysis
I think the preceding section has given an intuitively appealing characterisation
of randomness. The best argument for the analysis, however, will be if it is able
to meet the demands we set down at the beginning of §2, and if it is capable of
bearing the weight of the scientific uses to which it will be put.
I maintain that unpredictability is perfectly able to support the explanatory
strategies we examined in §1. In indeterministic situations, phenomena will be
unpredictable at least partly in virtue of that indeterminism. Randomness there-
fore shares in whatever explanatory power that indeterminism demonstrates in
those cases. In deterministic cases, our account of the explanatory success of
randomness must ultimately rest on something else, such as the pragmatic or
epistemic features of agents who accept the probabilistic theories. Note that the
proximate explanation of the explanatory success of randomness, deriving from
unpredictability, remains unified across both the deterministic and indeterministic
situations—a desirable feature of my proposal. Our cognitive capacities are such
that, in many cases, prediction of some phenomenon can only be achieved by ex-
ceedingly subtle and devious means. As such, these phenomena are best treated
as a random input to the system. The fact that these models are borne out em-
pirically vindicates our methodology; for example, we didn’t have to show that
rainfall was genuinely completely ontically undetermined in order to do good sci-
ence about the phenomenon in question. This is similarly the case with random
mating, weather prediction, noise and error correction, and coin tossing. In ran-
dom sampling (and game theory), we merely need to use processes unpredictable
by our opponents or by the experimental subjects to get the full benefits of the sta-
tistical inference: if they are forced to treat the process as random, then any skill
they demonstrate in responding to that process must be due to purely intrinsic
features of the trials to which they are responding.
The defining feature of the scientific theories at which we looked in §1 is the
presence of exact and robust statistical information despite ignorance about the
precise constitution of the individual events. Rainfall events have a definite prob-
ability distribution, but precisely when and where it rains is random. If this is the
hallmark of random phenomena, then we can easily see why the particular prob-
abilistic version of unpredictability we used to define randomness is appropriate.
Indeed, in paradigm cases, unpredictability (and hence randomness) involves the
probabilistic independence of some predicted event and the events which consti-
tute the resources for the prediction. In such cases, one can easily see how the
inferential strategies we have identified are legitimate. With respect to random
sampling, it is the probabilistic independence of the choice of test subjects from
the choice of test regimes that allows for the application of significance tests on
experimentally uncovered correlations. In the case of random mating, it is the fact that mating partner choice is probabilistically independent of the genetic endowment of that partner that allows the standard Hardy-Weinberg law to apply. It is relax-
ation of this independence requirement that makes for non-random mating. This
probabilistic aspect of randomness and unpredictability is crucial to understand-
ing the typical form of random processes and their role in understanding objective
probability assignments by theories.
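
A worked instance of the Hardy-Weinberg law (with an invented allele frequency) shows how independence of partner choice from genotype fixes the genotype frequencies:

    # Under random mating, allele frequencies p and q = 1 - p yield genotype
    # frequencies p^2, 2pq and q^2.
    p = 0.7   # frequency of allele A (invented)
    q = 1 - p
    genotypes = {"AA": p**2, "Aa": 2 * p * q, "aa": q**2}

    print(genotypes)   # approximately {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}
    assert abs(sum(genotypes.values()) - 1.0) < 1e-12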
How does our analysis of randomness as unpredictability do on our four de-
mands (§2)?
(1) Statistical Testing Sequences that are unpredictable to an agent can be ef-
fectively produced, since those sequences do not need to have some known
genuine indeterminacy in their means of production in order to ground the
statistical inferences we make using them. Subjecting the process to a fi-
nite battery of statistical tests designed to weed out sequences that are pre-
dictable by standard human subjects is, while difficult, nevertheless possi-
ble. Correlations between the test subjects and the random sequence can
still occur by chance, but since there can be no a priori guarantee that could
ever rule out accidental correlations even in the case of genuinely indeter-
ministic sequences, no account of randomness should wish to eliminate the
possibility. We should only rule out predictable sources of correlation other than the one we wish to investigate. (A sketch of one such test is given after this list.)
(2) Finite Randomness Single events, as well as finite processes, can be un-
predictable.
(3) Explanation and Confirmation A probabilistic theory which classifies
some process as random is, as a whole, amenable to incremental confirma-
tion (Howson and Urbach, 1993, ch. 14). Moreover a particular statistical
hypothesis which states that the process has a unpredictable character can
also be incrementally confirmed or disconfirmed, as the evidence is more
or less characteristic of an unpredictable process. A special case is when
the phenomenon is predicted better than chance; this would be strongly dis-
confirmatory of randomness. When confirmed, there seems no reason why
such theories or hypotheses cannot also possess whatever features make for
good explanations; they can surely form part of excellent statistical expla-
nations for why the processes exhibit the character they do. We have gone
to some lengths above to show that unpredictability can fill in quite ade-
quately for randomness in typical uses; there seems no reason why it could
not effectively substitute in explanations as well.
(4) Determinism Unpredictability occurs for many reasons independent of
indeterminism, and is compatible with determinism. Thus we can still have
random sequences in deterministic situations, and as part of theories that
supervene on deterministic theories. The key to explaining why random-
ness and indeterminism seem closely linked is that the theories themselves
should not be deterministic, even if they are acceptable accounts of ontically
deterministic situations.
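
Returning to demand (1), here is a minimal sketch of one member of such a finite battery: a frequency (monobit) test under the usual normal approximation. The function name and significance level are invented for illustration:

    import random
    from math import erfc, sqrt

    def frequency_test(bits, alpha=0.01):
        # A sequence whose overall bias towards one outcome is detectable
        # offers a predictive handle, and so should be weeded out.
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)
        p_value = erfc(abs(s) / sqrt(n) / sqrt(2))   # two-sided p-value
        return p_value >= alpha    # True: no detectable bias

    random.seed(1)
    coin = [random.random() < 0.5 for _ in range(10_000)]
    print(frequency_test(coin))              # typically True
    print(frequency_test([True] * 10_000))   # False: trivially predictable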
Analysing randomness as unpredictability, I maintain, gives us the features
that the intuitive concept demands, without sacrificing its scientific applicability.
It certainly does better than its rivals; even setting those rivals aside, it captures enough of our intuitions to truly deserve the name. The final, and most demanding, test would be
to see how the account works in particular cases: how, for example, to cash out
the hypothesis that the mate of a female Cambrian trilobite was chosen at random
from among the male members of her population.[41] In outline, my proposal is
that a correct account of trilobite mating would show that there is no way for
us to predict (retrodict) better than chance which mate would be (was) chosen,
when knowledge concerning the male individuals is restricted to their heritable
properties (which of course are the significant ones in a genetic context). This
entails that there is no probabilistic dependence between possession of a certain
phenotype and success in mating. (Of course, given extra knowledge, such as
the knowledge concerning which male actually did succeed in mating with this
individual, or given facts about location or opportunity, we can predict better than
chance which male would be successful; these properties are not genetic, and do
not conflict with the assumption of random mating.) This account of how to apply
the theory must remain a sketch, but I hope it is clear how the proposal might
apply to other cases.

[41] I owe the question, and the illustrative example, to John Burgess.
Acknowledgements
Thanks to audiences at Oxford, Leeds, the 2002 AAP in Christchurch, and Prince-
ton. Particular thanks to Paul Benacerraf, John Burgess, Adam Elga, Alan Hájek,
Hans Halvorson, Lizzie Maughan and Bas van Fraassen for comments on earlier
written versions.
1 January, 2005.
Bibliography
Albert, David Z. (1992), Quantum Mechanics and Experience. Cambridge, MA:
Harvard University Press.
Batterman, R. and White, H. (1996), “Chaos and Algorithmic Complexity”. Foun-
dations of Physics, vol. 26: pp. 307–36.
Bell, J. S. (1987a), “Are there Quantum Jumps?” In Bell (1987c), pp. 201–12.
———— (1987b), “On the Impossible Pilot Wave”. In Bell (1987c), pp. 159–68.
———— (1987c), Speakable and Unspeakable in Quantum Mechanics. Cam-
bridge: Cambridge University Press.
Bishop, Robert C. (2003), “On Separating Predictability and Determinism”.
Erkenntnis, vol. 58: pp. 169–88.
Chaitin, Gregory (1975), “Randomness and Mathematical Proof”. Scientific
American, vol. 232: pp. 47–52.
Church, Alonzo (1940), “On the Concept of a Random Sequence”. Bulletin of the
American Mathematical Society, vol. 46: pp. 130–135.
Dembski, William A. (1991), “Randomness By Design”. Noûs, vol. 25: pp. 75–
106.
Earman, John (1986), A Primer on Determinism. Dordrecht: D. Reidel.
Fisher, R. A. (1935), The Design of Experiments. London: Oliver and Boyd.
Frigg, Roman (2004), “In What Sense is the Kolmogorov-Sinai Entropy a Mea-
sure for Chaotic Behaviour?—Bridging the Gap Between Dynamical Systems
Theory and Communication Theory”. British Journal for the Philosophy of Sci-
ence, vol. 55: pp. 411–34.
Ghirardi, G. C., Rimini, A. and Weber, T. (1986), “Unified Dynamics for Micro-
scopic and Macroscopic Systems”. Physical Review D, vol. 34: p. 470.
Griffiths, Thomas L. and Tenenbaum, Joshua B. (2001), “Randomness and Coin-
cidences: Reconciling Intuition and Probability Theory”. In Proceedings of the
Twenty-Third Annual Conference of the Cognitive Science Society.
Hartl, Daniel L. (2000), A Primer of Population Genetics. Sunderland, MA: Sinauer, 3 ed.
Hellman, Geoffrey (1978), “Randomness and Reality”. In Peter D. Asquith and
Ian Hacking (eds.), PSA 1978, vol. 2, pp. 79–97, Chicago: University of
Chicago Press.
Howson, Colin (2000), Hume’s Problem: Induction and the Justification of Belief.
Oxford: Oxford University Press.
Howson, Colin and Urbach, Peter (1993), Scientific Reasoning: the Bayesian Ap-
proach. Chicago: Open Court, 2 ed.
Hughes, R. I. G. (1989), The Structure and Interpretation of Quantum Mechanics.
Cambridge, MA: Harvard University Press.
Humphreys, Paul W. (1978), “Is ‘Physical Randomness’ Just Indeterminism in
Disguise?” In Peter D. Asquith and Ian Hacking (eds.), PSA 1978, vol. 2, pp.
98–113, Chicago: University of Chicago Press.
Jeffrey, Richard C. (2004), Subjective Probability (The Real Thing). Cambridge:
Cambridge University Press.
Kolmogorov, A. N. (1963), “On Tables of Random Numbers”. Sankhyā, vol. 25:
pp. 369–376.
Kolmogorov, A. N. and Uspensky, V. A. (1988), “Algorithms and Randomness”.
Theory of Probability and Its Applications, vol. 32: pp. 389–412.
Kyburg, Jr., Henry E. (1974), The Logical Foundations of Statistical Inference.
Dordrecht: D. Reidel.
Laio, F., Porporato, A., Ridolfi, L. and Rodriguez-Iturbe, Ignacio (2001), “Plants
in Water-Controlled Ecosystems: Active Role in Hydrological Processes and
Response to Water Stress. II. Probabilistic Soil Moisture Dynamics”. Advances
in Water Resources, vol. 24: pp. 707–23.
Laplace, Pierre-Simon (1951), A Philosophical Essay on Probabilities. New York:
Dover.
Lewis, David (1980), “A Subjectivist’s Guide to Objective Chance”. In Lewis
(1986), pp. 83–132.
———— (1984), “Putnam’s Paradox”. Australasian Journal of Philosophy, vol. 62:
pp. 221–36.
———— (1986), Philosophical Papers, vol. 2. Oxford: Oxford University Press.
Li, Ming and Vitányi, Paul M. B. (1997), An Introduction to Kolmogorov Complexity and Its Applications. Berlin and New York: Springer-Verlag, 2 ed.
Martin-Löf, Per (1966), “The Definition of a Random Sequence”. Information
and Control, vol. 9: pp. 602–619.
———— (1969), “The Literature on von Mises’ Kollektivs Revisited”. Theoria,
vol. 35: pp. 12–37.
———— (1970), “On the Notion of Randomness”. In A. Kino (ed.), Intuitionism
and Proof Theory, Amsterdam: North-Holland.
Mayo, Deborah (1996), Error and the Growth of Experimental Knowledge.
Chicago: University of Chicago Press.
Montague, Richard (1974), “Deterministic Theories”. In Richmond H. Thomason
(ed.), Formal Philosophy, New Haven: Yale University Press.
Popper, Karl (1982), The Open Universe. Totowa, NJ: Rowman and Littlefield.
Rodriguez-Iturbe, Ignacio (2000), “Ecohydrology”. Water Resources Research,
vol. 36: pp. 3–10.
Rodriguez-Iturbe, Ignacio, Porporato, A., Ridolfi, L., Isham, V. and Cox, D. R.
(1999), “Probabilistic Modelling of Water Balance at a Point: the Role of
Climate, Soil and Vegetation”. Proceedings of the Royal Society of London,
vol. 455: pp. 3789–3805.
Schurz, Gerhard (1995), “Kinds of Unpredictability in Deterministic Systems”. In
P. Weingartner and G. Schurz (eds.), Law and Prediction in the Light of Chaos
Research, pp. 123–41, Berlin: Springer.
Shannon, Claude E. and Weaver, Warren (1949), The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.
Sklar, Lawrence (1993), Physics and Chance. Cambridge: Cambridge University
Press.
Skyrms, Brian (1996), Evolution of the Social Contract. Cambridge: Cambridge
University Press.
Smith, Peter (1998), Explaining Chaos. Cambridge: Cambridge University Press.
Stalnaker, Robert C. (1984), Inquiry. Cambridge, MA: MIT Press.
Stone, M. A. (1989), “Chaos, Prediction and Laplacean Determinism”. American
Philosophical Quarterly, vol. 26: pp. 123–31.
Strevens, Michael (2003), Bigger than Chaos: Understanding Complexity through
Probability. Cambridge, MA: Harvard University Press.
Suppes, Patrick (1984), Probabilistic Metaphysics. Oxford: Blackwell.
van Fraassen, Bas C. (1980), The Scientific Image. Oxford: Oxford University
Press.
———— (1989), Laws and Symmetry. Oxford: Oxford University Press.
van Lambalgen, Michiel (1987), “Von Mises’ Definition of Random Sequences
Revisited”. Journal of Symbolic Logic, vol. 52: pp. 725–55.
———— (1995), “Randomness and Infinity”. Tech. Rep. ML-1995-01, ILLC, University of Amsterdam, URL www.illc.uva.nl/Publications/ResearchReports/ML-1995-01.text.ps.gz.
Ville, J. (1939), Étude Critique de la Notion de Collectif. Paris: Gauthier-Villars.
von Mises, Richard (1957), Probability, Statistics and Truth. New York: Dover.
Wigner, Eugene P. (1961), “Remarks on the Mind-Body Question”. In I. J. Good
(ed.), The Scientist Speculates, pp. 284–302, New York: Basic Books.
42

of scientific theories and methodologies, the consequences of this mistaken view are potentially quite serious. After I briefly outline the scientific roles of randomness, I will survey the false views that currently monopolise philosophical thinking about randomness. I then make my own positive proposal, not merely as a contribution to the correct understanding of the concept, but also hopefully prompting a renewal of philosophical attention to randomness. The view I defend, that randomness is unpredictability, is not entirely without precedent in the philosophical literature. As can be seen from the epigraphs I quoted at the beginning of this paper, the connection between the two concepts has made an appearance before.1 These quotations are no more than suggestive however: these authors were aware that there is some kind of intuitive link, but made no efforts to give any rigorous development of either concept in order that we might see how and why randomness and prediction are so closely related. Indeed, the Suppes quotation is quite misleading: he adopts exactly the pernicious hypothesis I discuss below (§3.2), and takes determinism to characterise predictability—so that what he means by his apparently friendly quotation is exactly the mistaken view I oppose! Correspondingly, the third objective I have in this paper is to give a plausible and defensible characterisation of the concept of predictability, in order that we might give philosophical substance and content to this intuition that randomness and predictability have something or other to do with one another.2 1 Randomness in Science

The concept of randomness occurs in a number of different scientific contexts. If we are to have any hope of giving a philosophical concept of randomness that is adequate to the scientific uses, we must pay some attention to the varied guises in which randomness comes. All of the following examples are in some sense derivative from the most central and crucial appearance of randomness in science—randomness as a prerequisite for the applicability of probabilistic theories. Von Mises was well aware of the centrality of this role; he made randomness part of his definition of probability. This association of randomness with von Mises’ hypothetical frequentism has unfortunately meant that interest in randomness has declined with the fortunes of that interpretation of probability. As I mentioned, this decline was hastened by the widespread belief that randomness can be explained merely as indeterminism. Both of these factors have lead to the untimely neglect of randomness as a cenAnother example is more recent: ‘we say that an event is random if there is no way to predict its occurrence with certainty’ (Frigg, 2004, 430). 2 Thanks to Steven French for emphasising the importance of these motivating remarks.
1

2

trally important concept for understanding a number of issues, among them being the ontological force of probabilistic theories, the criteria and grounds for acceptance of theories, and how we might evaluate the strength of various proposals concerning statistical inference. Especially when one considers the manifest inadequacies of ontic accounts of randomness when dealing with these issues (§2), the neglect of the concept of randomness seems to have left a significant gap in the foundations of probability. We should, however, be wary of associating worries about randomness too closely with issues in the foundations of probability—those are only one aspect of the varied scientifically important uses of the concept. By paying attention to the use of the concept, hopefully we can begin to construct an adequate account that genuinely plays the role required by science. 1.1 R S

Many dynamical processes are modelled probabilistically. These are processes which are modelled by probabilistic state transitions.3 Paradigm examples include the way that present and future states of the weather are related, state transitions in thermodynamics and between macroscopic partitions of classical statistical mechanical systems, and many kinds of probabilistic modelling. Examples from ‘chaos theory’ have been particularly prominent recently (Smith, 1998). For example, in ecohydrology (Rodriguez-Iturbe, 2000), the key concept is the soil water balance at a point within the rooting depth of local plants. The differential equations governing the dynamics of this water balance relate the rates of rainfall, infiltration (depending on soil porosity and past soil moisture content), evapotranspiration and leakage (Laio et al., 2001, Rodriguez-Iturbe et al., 1999). The occurrence and amount of rainfall are random inputs.4 The details are interesting, but for our purposes the point to remember is that the randomness of the rainfall input is important in explaining the robust structure of the dynamics of soil moisture. Particular predictions of particular soil moisture based on particular volumes of rainfall are not nearly as important for this project as understanding the responses of soil types to a wide range of rainfall regimes. The robust probabilistic structures that emerge from low-level random phenomena are crucial to the task of explaining and predicting how such systems evolve over time and what consequences their structure has for the systems that depend on soil moisture, for example, plant communities.5
This is unlike the random mating example (§1.2), where we have deterministic transitions between probabilistically characterised states. 4 These are modelled by a Poisson distribution over times between rainfall events, and an exponential probability density function over volumes of rainfall. 5 Strevens (2003) is a wonderful survey of the way that probabilistic order can emerge out of the complexity of microscopic systems.
3

3

Similar dynamical models of other aspects of the natural world, including convection currents in the atmosphere, the movement of leaves in the wind, and the complexities of human behaviour, are also successfully modelled as processes driven by random inputs. But the simplest examples are humble gaming devices like coins and dice. Such processes are random if anything is: the sequence of outcomes of heads and tails of a tossed coin exhibits disorder, and our best models of the behaviour of such phenomena are very simple probabilistic models. At the other extreme is the appearance of randomness in the outcomes of systems of our most fundamental physics: quantum mechanical systems. Almost all of the interpretations of quantum mechanics must confront the randomness of experimental outcomes with respect to macroscopic variables of interest; many account for such randomness by positing a fundamental random process. For instance, collapse theories propose a fundamental stochastic collapse of the wave function onto a particular determinate measurement state, whether mysteriously induced by an observer (Wigner, 1961), or as part of a global indeterministic dynamics (Bell, 1987a, Ghirardi et al., 1986). Even no-collapse theories have to claim that the random outcomes are not reducible to hidden variables.6 1.2 R B

The most basic model that population genetics provides for calculating the distribution of genetic traits in an offspring generation from the distribution of such traits in the parent generation is the Hardy-Weinberg Law (Hartl, 2000, 26–9).7 This law idealises many aspects of reproduction; one crucial assumption is that mating between members of the parent generation is random. That is, whether mating occurs between arbitrarily selected members of the parent population does not depend on the presence or absence of the genetic traits in question in those members. Human mating, of course, is not random with respect to many genetic traits: the presence of particular height or skin colour, for example, does influence whether two human individuals will mate. But even in humans mating is random with respect to some traits: blood group, for example. In some organisms, for instance some corals and fish, spawning is genuinely random: the parent population gathers in one location and each individual simply ejects their sperm or eggs into the ocean where they are left to collide and fertilise; which two individuals end up mating is a product of the random mixing of the ocean currents. The randomness of mating is a pre-requisite for the application of the simple dynamics; there is no explicit presence of a random state transition, but such behaviour is presupposed
For details, see Albert (1992), Hughes (1989). The law relates genotype distribution in the offspring generation to allele distribution in the parents.
7 6

4

The key assumption that classical statistics makes in these cases is that the sample is random (Fisher. See also Howson (2000). a randomising mixed strategy dominates any pure strategy (Suppes. 1996. the idea is that the players in the game can look to an external source. 1. One plausible idea is that if Bayesians have priors that rule out bizarre sources of correlation. and randomising rules out more homely sources of correlation. Another application of the concept of randomness is to agents involved in the evolution of conventions (Skyrms. the epistemic aspects of randomness are most important for its role in scientific explanations. 1984. 260–74. pp. Another nice example of random behaviour occurs in game theory.8 The idea here. The question whether Bayesian statistics should also use randomisation is addressed by Howson and Urbach (1993). in the convention of stopping at lights which have two colours but no guidance as to the intended meaning of each. This is obviously important in cases where the statistical properties of the whole population are of most interest . For example. As we have seen. 1935). as in many others. for instance in clinical or agricultural trials. is that we should expect no correlation between the properties whose distribution the test is designed to uncover and the properties that decide whether or not a particular individual should be tested. for some populations.in the application of the theory. where the patients or fields selected should be independent of the treatment given and representative of the population from which they came with respect to treatment efficacy. the Hardy-Weinberg principle is explanatory of the dynamics of genetic traits in a population. In many games where players have only incomplete information about each other. In this case.3 R S In many statistical contexts. 75–6). Complicating the law by making its assumptions more realistic only serves to indicate how various unusual features of actual population dynamics can be deftly explained as a disturbance of the basic underlying dynamics encoded in the law. perceived as random. 48–51 and Mayo (1996). the assumptions are not even idealised. but nevertheless the overall distribution of mating is determined by the statistical features of the population as a whole. 210–2). Despite its many idealisations. and take that as providing a way of breaking a symmetry and escaping a non-optimal mixed equilibrium in favour of what Aumann calls a correlated equilibrium. Each mating event is random. or in the reading of certain kinds of external indicators in a game of chicken (hawk-dove). experimenters have to select a representative sample of a population. 8 5 . then the posterior after the experiment has run is reliable. again. pp. It is also important when constructing other experiments.

Prior to t that particle had not been present in the universe. The experimenters want a certain kind of patternlessness in the ordering of the cups. namely. it is inappropriate to think that the only relevant factors are the information transmitted and the encoding of that information (Shannon and Weaver. we present her with eight cups of tea. the randomising should only rule out those cases where it is not. hence the prior state does not determine the future state. 1. Because Newtonian physics imposes no upper bound on the velocity of a point particle. he must randomise the cup order. A. If her internal algorithm is actually correlated with the presence of milk-first. The outcome of a trial of this experiment is a judgement by the woman of which cups of tea had milk. More earthly examples are not hard to find. and they know the recipe by which they computed in which order the cups should come. stray interference with an electrical or radio signal). That is. this order would not be random for the experimenters: they know which cup comes next. 33–39). Shannon noted that when modelling signal transmission systems. We wish to test her discriminatory powers. Of course. it is nomologically possible in Newtonian mechanics to have a particle whose velocity is finite but unbounded. I think. exactly four of which had the milk added first. The experimenter must strive to avoid correlation between the order in which the cups are presented.  N John Earman has argued that classical Newtonian mechanics is indeterministic. 1986. we suppose a woman claims to be able to taste whether milk was added to the empty cup or to the tea. The experimenters also wish themselves to know in what order the cups are coming. and whatever internal algorithm the woman uses to decide which cups to classify as milk-first. but rather serves to mark our recognition of complete capriciousness in the event. especially since they serve a purely negative role and cannot be 6 . There are physical factors that can corrupt the physical representation of that data (say. which appears at spatial infinity at some time t (this is the temporal inverse of an unboundedly accelerating particle that limits to an infinite velocity in a finite time). It is not appropriate or feasible to explicitly incorporate such disturbances in the model. those cases where she is faking it. such a space invader is completely unexpected—it is plausible. a kind of disorder that is designed to disturb accidental correlations (Dembski. 1949). 1991). on the basis of a very special kind of case (Earman. Randomness in this case does not capture some potentially explanatory aspect of some process or phenomenon. but not to the experimenters.4 C. since such a ‘space invader’ particle is possible. Intuitively. An important feature of this case is that it is important that the cup selection be random to the woman. the experiment would be uninterpretable without such knowledge.In Fisher’s famous thought experiment. to regard such an occurrence as completely and utterly random and arbitrary.

The randomness of the noise is crucial: if it were not random. if they work. It must at least apply to finite phenomena. As it stands. and ascertain whether a given sequence is random. These form a rough picture of the intuitions that scientists marshal when describing a phenomenon as random. the alterations in DNA produced by imperfect copying are deterministic. (1) Statistical Testing We need a concept of randomness adequate for use in random sampling and randomised experiments. and it is a very general technique for simulating the pattern of disturbances even in deterministic systems with no other probabilistic aspect. these mutations are random with respect to the genes they alter. All the models we have mentioned include noise as a confounding factor. it could be explicitly addressed and controlled for. we must begin with the scientifically acceptable uses of the concept.9 2 Concepts of Randomness If we are to understand what randomness is. It may be that. In particular. 7 . and hence the differential fitness they convey. rely crucially on the random and unsystematic distribution of errors. Consider some of the competing demands on an analysis of randomness that may be prompted by our examples. A further example is provided by the concept of random mutation in classical evolutionary theory. as in Earman’s example or a single instance of random mating. Nevertheless. These models therefore include a random noise factor: random alterations of the signal with a certain probability distribution. (2) Finite Randomness The concept of randomness must apply to the single event. our task is to systematise these intuitions as best we can into a rigorous philosophical analysis of this intuitive conception. indicating why certain systems exhibit the kinds of behaviour they do. which. we need to be able to produce random sequences on demand. only accommodated. 9 Thanks to Spencer Maughan for the example. the hypothesis that a system is random must be amenable to incremental empirical confirmation or disconfirmation. from a biochemical perspective. to this end.controlled for. (3) Explanation and Confirmation Attributions of randomness must be able to be explanatorily effective. noise in signalling systems is addressed by complex error-checking protocols.

25) The intuition is that. 324). 2. If we could pick out a biased subsequence. Even if one recognises that these demands are merely suggested by the examples and may not survive careful scrutiny. else we cannot explain the use of randomness to describe processes in population genetics or chaotic dynamics. the relative frequency of Ai remains the same as in the larger sequence: relfS = relfS . . An . 1957. This we shall now do with the two most prominent past attempts to define randomness: the place selection/statistical test conception and the complexity conception of randomness. this temptation may grow stronger when one considers how previous explications of randomness deal with the cases we described above. and (ii) for every infinite subsequence S chosen i by an admissible place selection. for example).1 V M/C/M-L¨  R  D 1 ( M-R). is vM-random iff (i) every outcome type Ai has a well-defined relative frequency relfS in S . one may be tempted to despair: “Indeed. . Confronted with this variety of uses of randomness to describe such varied phenomena. i.” (Howson and Urbach. . An infinite sequence S of outcomes of types A1 . we should reject them. . 1993. Church (1940). and each of those has the same limit relative frequencies of outcomes.e. if we pick out subsequences independently of the contents of the elements we pick (by paying attention only to their indices. that would indicate that some set of indices had a greater than chance probability of being occupied by some particular outcome. it seems highly doubtful that there is anything like a unique notion of randomness there to be explicated. and so fails condition (2) of finiteness. the intuition is that such an occurrence would not be consistent with randomness. attempting to make von Mises’ remarks precise. What is an admissible place selection? Von Mises himself says: [T]he question whether or not a certain member of the original sequence belongs to the selected partial sequence should be settled independently of the result of the observation. proposed that admissible place selections are recursive functions that decide whether an element 8 .(4) Determinism The existence of random processes must be compatible with determinism. Both do poorly in meeting our criteria. before anything is known about the result. (von Mises. then the sequence is random. poorly enough that if a better account were to be proposed. i i Immediately. the definition only applies to infinite sequences.

An immediate corollary is that no random sequence can be recursively computable: else there would be a recursive function that would choose all and only 1s from the initial sequence. Furthermore. 730–1. and no finite sequence can be vM-random at all. That our evidence is at best finite means that the claim that an actual sequence is vM-random is empirically underdetermined. That is. 1987. One possible exception is in those cases where we have a rigorous demonstration that the behaviour in question cannot be generated by a deterministic system—in that case. the function which generates the sequence itself. and ‘select any element that comes after the subsequence 010’ are both admissible place selections. any initial segment of this sequence is not random with respect to the whole sequence or any infinite selected subsequence. There seems to be no empirical constraint that could lead us to postulate that such a sequence is genuinely vM-random. we cannot produce random sequences for use in statistical testing. note that in this case we have made essential 9 . the frequency of 1s is greater than or equal to one half (van Lambalgen. vM-randomness is a profligate hypothesis that we cannot be justified in adopting. the system may be genuinely vM-random. for some given sequence. that it is produced by some pseudo-random function). at some finite time the checking machine will halt with a ‘no’ answer). Indeed. it is perfectly possible that some genuinely vM-random infinite sequence has an arbitrarily biased initial segment. This point may be made more striking by noting that actual statistical testing only ever involves finite sequences.745–8). For example. no finite segment will ever provide enough evidence to justify claiming that the actual sequence of outcomes of which it is a part is random. But if a random sequence cannot be effectively generated. there is some infinite sequence S such that the limit frequency of 1s in any subsequence of S = ϕ j (S ) selected by some place selection is one half. A theorem of Ville (1939) establishes a stronger result: given any countable set of place selections {ϕi }. even to the point where all the outcomes of the sequence that actually occur during the life of the universe are 1s. Neither can we effectively test.g. they are useless for statistical testing purposes. since any finite sequence is recursively computable. Even granting the existence of such demonstrations. despite the fact that for every finite initial segment of the sequence. failing the first demand. For such a test would involve exhaustively checking all recursive place selections to see whether they produce a deviant subsequence. If random sequences are neither producible or discernible. whether it is random. where more empirically tractable hypotheses will do far better. Hence it can play no role in explanations of random phenomena in finite cases. ‘select only the odd numbered elements’. and deserving of a arbitrarily low credence because any finite sequence is better explained by some other hypothesis (e.si is to be included in a subsequence on input of the index number i and the initial segment of the sequence up to si−1 . namely. and this is not decidable in any finite time (though for any non-random sequence.

vM-random sequences also—almost all strictly increasing sets of integers (Wald place selections) select infinite subsequences of a random sequence that preserve relative frequencies. 75) This is the idea that in constructing statistical tests and random number generators. A random sequence is then one that. Random sequences may well exist for infinite strings of quantum mechanical measurement outcomes.” (p. Partly in response to these kinds of worries. properly to be randomness. since they must leave up to chance exactly which entities come to constitute the random selection. His idea is that biased sequences are possible but unlikely: non-random sequences. “Randomness.10 Van Lambalgen (1987) shows that Martin-Löf (ML)-random sequences are. we would not reject the hypothesis. Martin-Löf’s idea is that truly random sequences satisfy all the probability one properties of a certain canonical kind: recursive sequential significance tests—this means (roughly) that a sequence is random with respect to some hypothesis H p about probability p of some outcome in that sequence if it does not provide grounds for rejecting H p at arbitrarily small levels of significance. 1970). random rainfall inputs are complex to describe. including the types of sequences considered in Ville’s theorem. The probability arising out of the test is the probability that chance alone could account for the divergence between the observed results and the hypothesis. 10 10 . a final modification of Von Mises’ suggestion was made by Martin-Löf (1966. with probability 1. the probability that the divergence between the observed sequence and the probability hypothesis (the infinite sequence) is not an indication that the classification is incorrect. for the purposes of statistical testing. Here we have simply abandoned the quest to explain deterministic randomness. Finally.appeal to a fact about the random process that produces the sequence. take the statistical tests you don’t want your sequence to fail. In this case. namely. 2. The Consider some statistical test such as the χ2 test. the first thing to be considered is the kinds of patterns that one wants the random object to avoid instantiating. 1969. even given an arbitrarily small probability that chance accounts for the divergence. but we don’t think that random phenomena are confined to indeterministic phenomena alone: vM-randomness fails demand (4). Then one considers the kinds of objects that can be constructed to avoid matching these tests.2 KCS- One aspect of random sequences we have tangentially touched on is that random sequences are intuitively complex and disordered. Arbitrary segments of ML-random sequences cannot satisfy this requirement. and make sure that the sequence is random with respect to these patterns. as Dembski (1991) points out. form a set of measure zero in the set of all infinite binary sequences. must leave nothing to chance. and we have strictly gone beyond the content of the evidence sequence in making that appeal. Random mating is disorderly at the level of individuals.

The mere fact that we can give results about the robustness of complexity results (namely. pp. 84). we get a relatively machineindependent characterisation of complexity. where it is not the space occupied by the programme. Earman (1986. van Lambalgen (1995). Such machines are known as asymptotically optimal machines. (1) ∀T ∃cT KU (S ) KT (S ) + cT .other main historical candidate for an analysis of randomness. some Turing machines are able to more compactly encode some sequences. begins with the idea that randomness is the (algorithmic) complexity of a sequence. Smith. ch. Smith (1998. Kolmogorov (1963). ch. Kolmogorov showed that there exist universal Turing machines U such that for any sequence S . D 3 (KCS-R). 9). Chaitin (1975). where the constant cT doesn’t depend on the length of the sequence. D 2 (C). a sequence is time-random just when all polynomial time algorithms fail to distinguish the putative random string from a real random string (equivalent because a natural way of distinguishing random from pseudo-random is by computing the function) (Dembski. suggested by the work of Kolmogorov. Sequences are time-random just in case the time taken to compute the algorithm and output the sequence is greater than polynomial in the size of the input. 11 11 . Batterman and White (1996). Equivalently. The complexity KT (S ) of sequence S is the length of the shortest programme C of some Turing machine T which produces S as output. but rather the time it takes to compute its output given its input. 1991. A sequence S is KCS-random if its complexity is approximately its length: K(S ) ≈ l(S ). See also Kolmogorov and Uspensky (1988).11 The complexity of a sequence is defined in terms of effective production of that sequence. and hence can be made arbitrarily small as the length of the sequence increases. VIII).13 One natural way to apply this definition to physical processes is to regard the sequence to be generated as a string of successive outcomes of some such process. that lots of universal machines will give roughly the same complexity value to any given sequence) doesn’t really get around the problem that any particular machine may well be biased with respect to some particular sequence (Hellman. (KT (S ) is set to ∞ is there does not exist a C that produces S ). A comprehensive survey of complexity and the complexity-based approach to randomness is Li and Vitanyi (1997).12 The upper bound on complexity of a sequence of length l is approximately l—we can always resort to hard-coding the sequence plus an instruction to print it. 13 A related approach is the so-called ‘time-complexity’ view of randomness. 1978. Chaitin and Solomonov (KCS). 1998). when given as input the length of S . 25–33). This definition is machine-dependent. 12 Though problems remain. Suppes (1984. If we let the complexity of a sequence be defined relative to such a machine.

I think. But the fraction 2l−k /2l = 1/2k . esp. It should. See Frigg (2004. Firstly. For each l. by the same reasoning. Even for very modest compression in large sequences (say. and so the single events they represent are not KCS-random either. for some k. A sequence of tosses of a biased coin (e. which has very recently been given an important role in detecting randomness in chaotic systems.In dynamical systems. longer sequences are more KCS-random. there are 2l binary sequences of length l. the biased sequence will be more compressible. 143–5).14 This definition fares markedly better with respect to some of our demands than vM-randomness. This means that single element sequences are not KCS-random. nor any way of effectively producing a KCSrandom sequence. 1995. trouble us that. hence there will be at most 2l−k programmes which generate non-KCS-random sequences. nor does there exist an algorithm which on input S yields output 1 iff that sequence is KCS-random (van Lambalgen. but I consider these far less worrisome in application (Smith.16 This prevents KCS-random sequences being effectively useFor instance. random mating would not be less random were the distribution over genotypes non-uniform. Pr(H) > 0.5) can be expected to have more frequent runs of consecutive 1s than an unbiased sequence. But in each of these cases. Brudno’s theorem establishes a connection between KCS-randomness and what is known as Kolmogorov-Sinai entropy. This is important because stochastic processes occur with arbitrary underlying probability distributions. this would naturally be generated by examining trajectories in the system: sequences that list the successive cells (of some partition of the state space) that are traversed by a system over time. and randomness needs to apply to all of them: intuitively. again. k = 20. there are no effective computational tests for KCS-randomness. KCS-randomness is thus primarily a property of trajectories. 156–7). What about statistical testing? Here. l = 1000) less than 1 in a million sequences will be non-KCS-random.15 It should also disturb us that biased sequences are less KCS-random than unbiased sequences (Earman.g. A single 1 interrupting a long sequence of 0s is even less KCS-random. to the point of satisfying all the statistical significance tests for their probability value. the distribution of 1s in the sequence can be as random as desired. so the proportion of non-KCS-random sequences within all sequences of length l (for all l) decreases exponentially with the degree of compressibility demanded. 16 There does not exist an algorithm which on input k yields a KCS-random sequence S as output such that |S | = k. 1998. 1986. 15 There are also difficulties in extending the notion to infinite sequences. there are finite sequences that are classified as KCSrandom. But the non-KCSrandom sequences amongst them are all generated by programmes of less than length l − k. This result is a fairly immediate 14 12 . intuitively. 10–1). This notion turns out to be able to be connected with a number of other mathematical concepts that measure some aspects of randomness in the context of dynamical systems. 430).

on the surface. it actually is an analysis of disorder in a sequence. 13 . not product randomness (Earman. 137–8). 1986. one may be drawn to the conclusion that no concept perfectly fills the role delineated by our four demands. My hypothesis is that scientific randomness is best analysed as a certain kind of unpredictability. Our true concern. the lack of an effective test renders the hypothesis of KCS-randomness of some sequence relatively immune to confirmation or disconfirmation. Be that as it may. Adopting a best candidate theory of content (Lewis. make any claims about scientific randomness. as Earman (1986) suggests. 3. therefore. to account for random behaviour.ful in random sampling and randomisation. (Though random phenomena typically exhibit disorderly behaviour. some preliminaries need to be addressed. perhaps. Our survey showed that the inference from product to process randomness failed: the class of processes that corollary of the unsolvability of the halting problem for Turing machines. and one may then settles on (for example) KCS-randomness as the best partial filler of the randomness role. the problems above seem to disqualify KCSrandomness from being a good analysis of randomness.1 P  P R The mathematical accounts of randomness we addressed do not. Rather. Our demands were all constraints on random processes. 1984). from the randomness of some sequence. irrespective of the provenance of that sequence.) 3 Randomness is Unpredictability: Preliminaries Perhaps the foregoing survey of mathematical concepts of randomness has convinced you that no rigorously clarified concept can meet every demand on the concept of randomness that our scientific intuitions place on it. this connection is neither necessary or sufficient. that the process underlying that sequence was random (or that an event produced by that process and part of that sequence was random). One suggestion is that perhaps we were mistaken in thinking that KCS complexity is an analysis of randomness. and that they might underlie stochastic processes and be compatible with determinism. these accounts invite us to infer. I think there is. is with process randomness. requiring that they might be used to randomise experiments. I think this proposal can satisfy each of the demands that emerge from our quick survey of scientific applications of randomness. Furthermore. Of course this conclusion only follows if there is no better filler of the role. Before I can state my analysis in full. and this may explain how these concepts became linked.

431). 431). while concurring with our conclusion that no mathematical definition of random sequence can adequately capture physical randomness.17 Typically. Secondly. and it provides at least a prima facie reason to think that a process is random. claims that ‘physical randomness’ is “roughly interchangeable with ‘indeterministic”’ (Hellman. What is true is that product randomness is a defeasible incentive to inquire into the physical basis of the outcome sequence. they can appear to give an analysis of randomness. These appeals are beside the point. the theorems depend on quite specific mathematical details of the models of the systems in question.2 R  I? The comparative neglect of the concept of randomness by philosophers is in large part due. Indeterminism here means that the complete and correct scientific theory of the process is indeterministic. Indeed. An individual model will be a particular history of the states that a system traverses (a specification of the properties and changes in properties of the physical system over time): call such a history a trajectory of the There is some psychological research which seems to indicate that humans judge randomness of sequences by trying to assimilate them to representative outcomes of random processes. Firstly. appeals are made at this point to theorems which show that ‘almost all’ random processes produce random outcome sequences. For. I think. Hellman. Any product-first conception of randomness will have difficulty explaining this clearly deep-rooted intuition (Griffiths and Tenenbaum. A scientific theory we take to be a class of models (van Fraassen. 17 14 . 9). and even the evidential status of random products is less important than it seems—on my account. 1989. far less stringent tests than von Mises or KCS can be applied that genuinely do pick out the random processes. 2004. and these details do not generalise to all the circumstances in which randomness is found.possess vM-random or KCS-random outcome sequences fails to satisfy the intuitive constraints on the class of random processes. giving such theorems very limited applicability. ch. 2004. and vice versa (Frigg. 83). insofar as these accounts do capture typical features of the outputs of random processes. 3. Such a possibility surely contradicts any claim that product randomness and process randomness are ‘extensionally equivalent’ (Frigg. even where these theorems can be established. to the pervasive belief in the pernicious hypothesis that a physical process is random just when that process is indeterministic. 2001). this presumptive inference may explain much of the intuitive pull exercised by the von Mises and KCS accounts of randomness. 1978. But this presumptive inference can be defeated. there remains a logical gap between process randomness and product randomness. some random processes exhibit highly ordered outcomes.

18 15 . while the dynamics can include any mappings between states. 1978). but these are inessential for our purposes (van Fraassen. Montague. One can certainly imagine that Newton was right. the kinds of random phenomena that chaotic dynamics gives rise to are perfectly physically possible. equivalently. it seems wrong to say that coin tossing is indeterministic.system.18 D 4 (E-M D). while the laws of coexistence and boundary conditions determine which properties can be combined to form an allowable state (for instance. an adequate understanding of randomness. 1986.19 Our definition of indeterminism made no mention of the concept of probability. 1974) Is it plausible that the catalogue of random phenomena we began with can be simply unified by the assumption that randomness is indeterminism? It seems not. The class of all possible trajectories is the scientific theory. on the other hand. must show how randomness and probability are related—hence indeterminism cannot be randomness. as on the Bohm theory (Bell. the idea gas law PV = nrT constrains which combinations of pressure and volume can coexist in a state). or of a particular genetic distribution in a population. Many of the phenomena we enumerated do not seem to depend for their randomness on the fact that the world in which they are instantiated is one where quantum indeterminism is the correct theory of the microphysical realm. A scientific theory is deterministic iff any two trajectories in models of that system which overlap at one point overlap at every point. we must at least allow for the possibility that quantum mechanics will turn out to be deterministic. or the states of soil moisture. Two types of constraints govern the trajectories: the dynamical laws (like Newton’s laws of motion) and the boundary conditions (like the Hamiltonian of a classical system restricts a given history to a certain allowable energy surface) govern which states can be accessed from which other states. I don’t think that the analysis of randomness utilised in these formal proofs is adequate. which depends on a high-level probabilistic hypothesis about the structure of mating interactions. Finally. so too with random mating. (Earman. 1989). or that creatures Some complications are induced if one attempts to give this kind of account for relativistic theories without a unique time ordering. Moreover. not low-level indeterminism. so I place little importance on these constructions. In Newtonian possible worlds. A theory is indeterministic iff it is not deterministic. 19 There are also purported proofs of the compatibility of randomness and indeterminism (Humphreys. This model of a scientific theory is supposed to be very general: the states can be those of the phase space of classical statistical mechanics. if two systems can be in the same state at one time and evolve into distinct states. 1987b). A system is (in)deterministic iff the theory which completely and correctly describes it is (in)deterministic.

predictability. a useful idealisation to pretend that a given process is indeterministic.3 and 6. I attempt to give a characterisation of randomness that is uniform across all theories. and therefore being a useful idealisation. randomness remained connected to determinism. like the concept of randomness. 4) D 5 (L D). It is to the concept of predictability that we now turn. (Laplace. 1951. 20 16 . on occasion. would be present to its eyes. it is nevertheless. while classical physics is deterministic. for when increasing conceptual sophistication enabled us to tease apart the concepts of determinism and predictability. I believe that a historical mistake still governs our thinking in this area. for it.engage in indeterministic mating: it would turn out to be something of a philosophical embarrassment if the only analysis our profession could provide made these claims correct. Classical statistical mechanics does not say that it is a useful idealisation that gas motion is random. 4 Predictability The Laplacean vision is that determinism is idealised predictability: [A]n intelligence which could comprehend all the forces by which nature is animated and the respective situation of all the [things which] compose it— an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies in the universe and those of the lightest atom. One response of behalf of the pernicious hypothesis is that.20 I think that this response confuses the content of concepts deployed within a theory. the theory is an idealisation that says gas motion is random. such as that theory being adequate for the task at hand. I think that the historical connection of determinism with prediction in the Laplacean vision can explain the intuitive pull of the idea that randomness is objective indeterminism. We must also be careful to explain why the hypothesis that randomness is indeterminism seems plausible to the extent that it does. simpliciter. rather than with its rightful partner. with the external factors that contribute to the adoption of a theory. A system is Laplacean deterministic iff it would be possible for an epistemic agent who knew precisely the instantaneous John Burgess suggested the possibility of this response to me—and pointed out that some remarks below (particularly §§4. nothing would be uncertain and the future.3) might seem to support it. and hence random. Here. as well as the past. regardless of whether those theories are deployed as idealisations or as perfectly accurate descriptions.

rather than explicitly considering it. we humans are limited in our epistemic capabilities. Given that there are worlds where prediction and determinism mesh in this way. In such cases. In what follows. 102–3).22 4. 2000. and is typically only able to distinguish properties that correspond to quite coarse partitions of the state space. with even one macrostate compatible with more than one indistinguishable microstate. A Laplacean deterministic system is where the epistemic features of some ideal agent cohere perfectly with the ontological features of that world. specification of the past macroscopic history of a system together with it present macrostate. Stone (1989).23 An infamous example of this is the bastardised notion of ‘epistemological determinism’. The first way is to try and make the epistemic capacities of the agent to ascertain the instantaneous state more realistic.1 E C  P The first kind of constraint to note concerns our ability to precisely ascertain the instantaneous state of a system. our epistemic situation is typically worse than this. we can establish that the system was in a relatively small region of the state space. 23 Note that. will help to make its present microstate more precise. The second way is to make the computational and analytic capacities of the agent more realistic. is another example of this persistent confusion (Howson. we will consider the use of this historical constraint to operate to give a more precise characterisation of the current state. In the case of classical statistical mechanics of an ideal gas in a box. Our measurement apparatus is not capable of arbitrary discrimination between different states. 21 17 . We are simply not capable of distinguishing states that can differ by arbitrarily little: one slight shift of position in one particle in a mole of gas. 22 For more on this. predictability for us and determinism do not match. Weakening the epistemic abilities of the ideal agent allows us to clearly see the separation of predictability and determinism. There are several reasons for this. see Bishop (2003). Earman (1986. Most importantly. Schurz (1995).21 There are two main ways to make the features of this idealised epistemic agent more realistic that would undermine this close connection. The unfortunately named distinction between ‘deterministic’ and ‘statistical’ hypotheses. ch.state and could analyse the dynamics of that system to predict with certainty the entire precise trajectory of the system. These trajectories may not include every point compatible with the currently observed state. At best. This is because the past history can indicate something about the bundle of trajectories upon which the system might be. temperature. it is easy to think that prediction and determinism are closely related concepts. as used by Popper (1982)—which is no form of indeterminism at all. frequently. 1). the standard partition of the state space is into regions that are macroscopically distinguishable by means of standard mechanical and thermodynamic properties: pressure. actually a distinction concerning the predictions made by theories. volume. over a relatively short interval of time.

measurement alters the state of the system. Given limits on the accuracy of such a specification. Repetition does only so much to counter these errors. (2) ∃ > 0 ∀δ > 0 ∃s0 ∈ S ∃t > 0 |s0 − s0 | < δ ∧ |st − st | > . all neighbouring trajectories within the bundle of states diverge exponentially fast. an adequate account of predictability must make reference to the actual abilities of the epistemic agents who are deploying the theories to make predictions. It would be a grave error to think that the in principle limitations are the more significant restrictions on predictions. §6). knowledge of initial macrostates. measurement introduces errors into the specification of the state. physical magnitudes are always accompanied by their experimental margin of error. no matter how fine grained. Then. A nice example of the consequences of imprecise specification of initial conditions is furnished by the phenomenon from chaotic dynamics known as sensitive dependence on initial conditions. On the contrary: prediction is an activity that arose primarily in the context of agency. in order for it to affect the system that is our measurement device. and some state s0 ∈ S . can always leave us in a position where the trajectories traversing the microstates that compose that initial macrostate each end up in a completely different macrostate. Consider some small bundle of initial states S . This has two consequences. 15. ever so slightly. 2003. giving us no decisive prediction. Firstly. where having reasonable expectations about the future is essential for rational action. or ‘error inflation’ (Smith. there are at least two states whose subsequent trajectories diverge by at least after some time t. As such. That is. In fact. pp. 167–8). our ability to predict will run out in a very short 18 . 1995. Measurement involves interactions: a system must be disturbed. An account of prediction which neglected these pragmatic constraints would thereby leave out why the concept of prediction is important or interesting at all (Schurz. §5). to predict even one more stage in the time evolution of the system demands an exponential increase in the accuracy of the initial state specification. Creatures who were not goal directed would have no use for predictions. for some systems. meaning we are never able to know the precise pre-measurement state (Bishop. We are forced to meddle and manipulate the natural world in ways that render uncertain the precise state of the system. 1998. Secondly.There is an ‘in principle’ restriction too. for some bundle of state space points that are within some arbitrary distance δ in the state space. Predictability fails. This is even more pressing if we consider the limitations that quantum mechanics places on simultaneous measurement of complementary quantities. for typically chaotic systems. How well can we accommodate this behaviour? It turns out then that predictability in such cases is exponentially expensive in initial data.

Humphreys’ (1978) purported counterexamples to the thesis that randomness is indeterminism relied on the following possibility: that the total history of a system may supervene on a single state. even if we have the computational abilities. it seems to me. 4. for our purposes. 4). An important issue for computation of predictions is the internal discrete representation of continuous physical magnitudes. This is typically because our theory yields probabilities of state transitions from one macrostate into another. This is especially pronounced when the dynamical equations governing of the system are not integrable and do not admit of a closed-form solution (Stone. This. would be prediction by cheating. Predictions of future states when the dynamics are based on open-form solutions are subject to ever-increasing complexity as the time scale of the prediction increases. then we could use the system itself as an ‘analogue computer’ that would simulate its own future behaviour. Given the very plausible hypothesis that human predictors have at best the computation capacities of Turing machines. ch. However (and this will be important in the sequel) we can predict global statistical behaviour of a bundle of trajectories. 1989). and knowledge of chaotic parameter values. What we demand of a prediction is the making of some reasonable. Bishop (2003) makes the plausible claim that any error in initial measurement will eventually yield errors in prediction.2 C C  P There may also be constraints imposed by our inability to track the evolution of a system along its trajectory.24 This combination of global structure and local instability is an important conceptual ingredient in randomness (Smith. 1998. theoretically-informed judgement about the unknown behaviour of a system—not remembering how it behaved in the past. 24 19 . 1986. this significant problem is completely bypassed by analogue computation (Earman. There is a sense in which all deterministic systems are computable: each system does effectively produce its own output sequence.) I propose that. hence the system is deterministic. If we are able (per impossibile) to arbitrarily control the initial conditions. 58–60). VI). while no computable sequence of states is isomorphic to that history.time for lots of systems of very moderate complexity of description. ch. 1998. this means that some state evolutions are not computable by predictors like us. but exponential error inflation is a particularly spectacular example. This approach We can also use shadowing theorems (Smith. predicting by consulting a reliable oracle is not genuine prediction either. we set prediction by cheating aside as irrelevant. (Similarly.

4.3 Pragmatic Constraints on Prediction

There are also constraints placed on prediction by the structure of the theory yielding the predictions. Consider thermodynamics. This theory gives perfectly adequate dynamical constraints on macroscopic state conditions. But it does not suffice to predict a state that specifies the precise momentum and position of each particle. Some features of the state are thus unpredictable because they are not fixed by the theory's description of the state: those details are 'invisible' to the thermodynamic state. This is only of importance because, on occasion, this is a desirable feature of theory construction. Given that explanation and prediction are tasks performed by agents with certain cognitive and practical goals in hand (van Fraassen, 1980), the utility of some particular theory for such tasks will be a matter of pragmatic qualities of the theory. Some theories don't repay the effort required to make predictions using them, even if those theories could, in principle, predict with certainty. Other theories are more simple and effective because various deterministic phenomena are treated as absolutely unpredictable. A theory of population genetics might simply plug in the proviso that mating happens unpredictably, where this is to be taken as saying that, for the purposes of the explanatory and predictive tasks at hand, it can be effectively treated as such. A random aspect of the process is perhaps to be seen as a qualitative factor in explanation of some quite different phenomenon, or as an ancillary feature not of central importance to the theory; or it might simply be proposed as a central irreducible explanatory hypothesis, whose legitimacy derives from the fruitfulness of assuming it. It is more perspicuous not to attempt to explain this higher-order stochastic phenomenon in terms of lower level theories. This is part of a general point about the explanatory significance of higher-level theories, but it has particular force for unpredictability.

4.4 Predictability Defined

Given these various constraints, I will now give a general characterisation of the predictability of a process.

Definition 6 (Prediction). A prediction function ψP,T(M, t) takes as input the current state M of a system described by a theory T as discerned by a predictor P, and an elapsed time parameter, and yields a temporally-indexed probability distribution Prt over the space of possible states of the system. A prediction is a specific use of some prediction function by some predictor on some initial state and elapsed time. (If the elapsed time is negative, the use is a retrodiction.)
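One hedged way to render a prediction function in code is as a coarse-grained Markov model; the transition probabilities below are invented for the example, since the definition itself fixes only the input and output types:

    import numpy as np

    MACROSTATES = ["A", "B", "C"]
    STEP = np.array([[0.8, 0.2, 0.0],    # row i: Pr(next | now in i)
                     [0.1, 0.6, 0.3],
                     [0.0, 0.4, 0.6]])

    def psi(M: str, t: int) -> dict:
        """Return Pr_t, a distribution over macrostates after t steps."""
        p = np.eye(3)[MACROSTATES.index(M)]   # point distribution on M
        for _ in range(t):
            p = p @ STEP                      # evolve one step
        return dict(zip(MACROSTATES, np.round(p, 3)))

    print(psi("A", 1))    # short horizon: sharply peaked
    print(psi("A", 50))   # long horizon: spread towards the stationary law

On this toy model, predictability visibly degrades with temporal distance, a parameter that will matter below.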

Let us unpack this a little. Consider a particular system that has been ascertained to be in some state M at some time. The states are supposed to be distinguished by the epistemic capacities of the predictors, so that in classical mechanics, for example, the states in question will be macrostates, individuated by differences in observable parameters such as temperature or pressure. The agent P who wishes to make the prediction has some epistemic and computational capabilities; these delimit the fine-grainedness of the partition of which M is a member. These are contextual features that are fixed by the surroundings in which the prediction is made: the epistemic and computational limitations of the predictor and the theory being utilised are presuppositions of the making of a prediction (Stalnaker, 1984). These contextual features fix a set of prediction functions that are available to potential predictors in that context. The theory T gives the basic ingredients for the prediction function, establishing the physical relations between states of the theory accepted by the agent and the class of possible functions, which jointly dictate the prediction functions that are available to the predictor.

A prediction is an attempt to establish what the probability is that the system will be in some other state after some time t has elapsed. The way such a question is answered, on my view, is by deploying a function of a kind whose most general form is a prediction function. The actual prediction is the updating of credences by the predictor who conditions on observed evidence and accepted theory, who then adopts Prt as their posterior credence function (conditional on the evidence and the theory). A perfect, deterministic prediction is the degenerate case where the probability distribution is concentrated on a single state (or a single cell of a partition).

The notion of an available prediction function may need some explanation. Availability must be a normative notion: it cannot be, for example, that a prediction function is available if an agent simply could update their credences in accordance with its dictates.25 Clearly, it is accepted theory and current evidence which are to be taken as basic: these fix some prediction functions as reasonable for the agents who believe those theories and have observed that evidence, and it is those reasonable prediction functions that are available to the agent in the sense I have discussed here. It must also be reasonable for the agent to update in that way, given their other beliefs, to fix on a posterior probability function.

25 The agent who updates by simply picking some future event and giving it credence 1 is updating his beliefs in future outcomes in a way that meets the definition of a prediction function. As such, unless that agent adopts a very idiosyncratic theory, this prediction function is (most likely) inconsistent with the theory the agent takes to most accurately describe the situation he is concerned to predict.

This conception of prediction has its roots in consideration of classical statistical mechanics, but the use of thermodynamic macrostates as a paradigm for the input state M may skew the analysis with respect to other theories.26

The input state M must include all the information we currently possess concerning the system whose behaviour is to be predicted. This might include the past history of the system, for example when we use trends in the stockmarket as input to our predictive economic models. It must also include some aspects of the microstate of the system, as in quantum mechanics.27 So the input state must be broader than just the current observations of the system, and it must include all the ingredients necessary, whatever those might be. Sometimes we must also include relevant knowledge or assumptions about other potentially interacting systems. This holds not only in cases where we assume that a system is for all practical purposes closed or isolated, but also in special relativity, where we can only predict future events if we impose boundary conditions on regions spacelike separated from us (and hence outside our epistemic access), for example that those regions are more or less like our past light cone.

The relation of the dynamical equations of the theory to the available prediction functions is an important issue. For some theories, the precise states will be ascertainable and the dynamical equations solvable; the prediction functions in this case will just be the dynamical equations used in the theory, and the probability distribution over final states will be concentrated on a point in the deterministic case, or given by the basic probabilistic rule in the indeterministic case (say, Born's rule in elementary quantum theory). Other cases are more complicated. In classical statistical mechanics, we have to consider how the entire family of trajectories that intersect M (i.e. overlap the microstates s that constitute M) behave under the dynamical laws, and whether tractable functions that approximate this behaviour can be found. The aim of a predictive theory is to yield useful predictions by means of a modified dynamics that is not too false to the underlying dynamics. For instance, the very simple prediction function for ergodic statistical mechanical systems is that the probability of finding a system in some state M after sufficient time has elapsed is the proportion of the phase space that M occupies.

26 Graham Priest suggested to me that the set of prediction functions be all recursive functions on the initial data, just so as to make the set of available predictions the same for all agents. But I don't think we need react quite so drastically, especially since to assume the availability of these functions is simply to reject some of the plausible computational limitations on human predictions. I thank Adam Elga for discussion of this point.

27 As Hans Halvorson pointed out to me, the uniform initial distribution over phase space in classical statistical mechanics is unavailable in quantum mechanics, so all probabilities of macroscopic outcomes are state dependent.
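This ergodic prediction function is easy to exhibit numerically; the sketch below uses an irrational rotation of the circle, a standard ergodic system chosen by me for the illustration:

    import math

    ALPHA = math.sqrt(2) - 1       # irrational rotation step
    M = (0.25, 0.65)               # a macrostate occupying 0.4 of the space

    x, hits, steps = 0.123, 0, 200_000
    for _ in range(steps):
        x = (x + ALPHA) % 1.0      # deterministic dynamics on [0, 1)
        hits += M[0] <= x < M[1]

    # The long-run fraction of time spent in M matches M's share of the
    # state space (about 0.4): the 'proportion of phase space' prediction.
    print(hits / steps)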

This requires a great many assumptions and simplifications, ergodicity prominent among them. In any case, the particular form of prediction functions is a matter for physical theory, and each theory will have different requirements. The general constraints seem to be those laid down in the preceding subsections, but no more detailed universal recipe for producing prediction functions can be given. Importantly, whether any function that meets these formal requirements is a useful or good prediction function is another matter; the logical properties of such a function are those I have specified above. A given prediction function can yield a distribution that gives probability one to the whole state space, but no information about probabilities over any more fine grained partition. Such a function, while perfectly accurate, is pragmatically useless and should be excluded by contextual factors. I presume that the predictor wishes to have the most precise partition of states that is compatible with accurate prediction. But the tradeoff between accuracy and fine-grainedness will depend on the situation in hand.

The ultimate goal, of course, is that the probability distribution given by the prediction function will serve as normative for the credences of the agents making the prediction (van Fraassen, 1989, p. 198). The probabilities are matched with the credence by means of a probability coordination rule, of which the Principal Principle is the best known example (Lewis, 1980). This is essential in explaining how predictions give rise to action. Another benefit is that we can easily convert a probability distribution over states into an expectation value for the random variable that represents the unknown state of the system, which enables a large body of statistical methodology to come to bear on the use and role of predictions.28 Prediction can then be described as yielding expectation values for some system given an estimation of the current values that characterise the system.

5 Unpredictability

With a characterisation of predictability in hand, we are in a position to characterise some of the ways that predictability can fail. In particular, since we have separated predictability from determinism, it turns out that being indeterministic is one way, but not the only way, in which a phenomenon can fail to be predictable.

Definition 7 (Unpredictability). An event E (at some temporal distance t) is unpredictable for a predictor P iff P's posterior credence in E after conditioning on current evidence and the best prediction function available to P is not 1—that is, iff the prediction function yields a posterior probability distribution that doesn't assign probability 1 to E.

28 For a start, see Jeffrey (2004).
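Definition 7 is simple enough to state directly in code; this sketch assumes prediction functions of the dictionary-returning kind used in the earlier illustrations:

    def unpredictable(prediction: dict, event: set) -> bool:
        """E is unpredictable iff the posterior credence in E, given the
        best available prediction, falls short of 1."""
        posterior = sum(p for state, p in prediction.items() if state in event)
        return posterior < 1.0

    # Deterministic prediction: all probability on one state.
    print(unpredictable({"A": 1.0, "B": 0.0}, {"A"}))   # False
    # Indeterministic (or coarse-grained) prediction: spread probability.
    print(unpredictable({"A": 0.7, "B": 0.3}, {"A"}))   # True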

There is some worry that this definition is too inclusive—after all, there are many future events that are intuitively predictable and yet we are not certain that they will occur. This worry can be assuaged by attending to the following two considerations.

Firstly, if the available well-confirmed prediction function allows us to considerably raise our posterior credence in the event, we might well be willing to credit it with significant predictive powers, even though it does not convey certainty on the event. This only indicates that between perfect predictability and the kind of unpredictability we shall call randomness (below, §6), there are greater or lesser degrees of unpredictability. Often, in everyday circumstances, we are willing to collapse some of these finer distinctions: we are willing, for example, to make little distinction between certainty and very high non-unity credences. (This is at least partially because the structure of rational preference tends to obscure these slight differences which make no practical difference to the courses of action we adopt to achieve our preferred outcomes.) It is therefore readily understood that common use of the concept of unpredictability should diverge from the letter, but I suggest not the spirit, of the definition given above.

Secondly, we must recognise that when we are prepared to use a theory to predict some event, and yet reserve our assent from full certainty in the predictions made, what we express by that is some degree of uncertainty regarding the theory. Our belief in and acceptance of theories is a complicated business, and we frequently make use of and accept theories that we do not believe to be true.29 Some of what I have to say here about pragmatic factors involved in prediction reflects the complexities of this matter. But it remains true that our conditional credences concerning events, conditional on the simple theories we use to predict them, capture the important credential states as far as predictability is concerned. So, many events are predictable according to the definition above, because conditional credence in the events is 1, conditional on the truth of those theories; but we nevertheless refrain from full certainty because we are not certain of the simple theory. The point is that prediction as I've defined it concerns what our credences would be if we discharged the condition on those credences, by coming to believe the theory with certainty, for example; and this obviously simplifies the actual nature of our epistemic relationship with the theories we accept.

29 Note, in passing, that this definition does not make biased sequences any more predictable than unbiased ones. Unpredictability has to do with our expectations, and in cases of a biased coin we do expect more heads than tails. But we still can't tell what the next toss will be to any greater precision than the bias we might have deduced, just because some outcome turns up more often; hence it remains unpredictable.
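The point of the footnote can be checked directly; here is a quick sketch of mine with an arbitrarily chosen bias:

    import random

    random.seed(1)
    BIAS = 0.7                     # Pr(heads) for the hypothetical coin
    tosses = [random.random() < BIAS for _ in range(100_000)]

    # The best strategy the theory licenses is to expect heads with
    # credence 0.7 on every toss. Always guessing 'heads' is right about
    # 70% of the time, and nothing does better: the posterior never
    # reaches 1, so each toss remains unpredictable.
    accuracy = sum(tosses) / len(tosses)
    print(f"always-heads guesser scores {accuracy:.3f}")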

It is important to note that predictability, on this account, is a theoretical property of an event, while relative to a predictor. It is the available prediction functions for some given theory that determine the predictions that can be made from the perspective of that theory. It is the epistemic and computational features of predictors that fix the appropriate theories for them to accept—namely, predictors accept theories which partition the state space at the right level of resolution to fit their epistemic capacities, and provide prediction functions which are well-matched to their computational abilities. In other words, the level of resolution and the allowed computational expenditure are parameters of predictability, and there will be different characteristic or typical parameters for creatures of different kinds. Note that one and the same type of event can be predicted at one temporal distance, and unpredictable at another, if the diverging trajectories require some extended interval of time to diverge from each other; and it can be judged differently in different epistemic communities.

An illustration of the definition in action is afforded by the case of indeterminism. If the correct theory of some system is indeterministic, then we can imagine an epistemic agent of perfect computational and discriminatory abilities, for whom the contextually salient partition on state space individuates single states, and who believes the correct theory. An event is unpredictable for such an agent just in case knowledge of the present state does not concentrate posterior credence only upon states containing the event. If the theory is genuinely indeterministic there exist lawful future evolutions of the system from the current state to each of incompatible future states S and S′. If there is any event true in S but not in S′, that event will be unpredictable. Indeed, if an indeterministic theory countenances any events that are not instantiated everywhere in the state space, then those events will be unpredictable.

It is easy to see how the features that separate prediction from determinism also lead to failures of predictability. The limited capacities of epistemic agents to detect differences between fundamental detailed states, and hence their limitation to relatively coarse-grained partitions over the state space, lead to the possibility of diverging trajectories from a single observed coarse state even in deterministic systems. The way that the theory represents some situation s is the same as the theory represents some distinct situation s′, but the ways the theory represents the future evolutions of those states, t(s) and t(s′), are distinct, so that within the theory we have duplicate situations evolving to distinct situations. Then there will exist events that do not get probability one and are hence unpredictable. This situation provides another perspective on the continued appeal of the thesis that randomness is indeterminism: theories which describe unpredictable phenomena treat those phenomena as indeterministic, the strongest form of unpredictability. If the agent does not possess the computational capacities to utilise the most accurate prediction functions, they may be forced to rely on simplified or approximate methods.
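A two-line dynamical system makes the coarse-graining point vivid (my sketch; the microdynamics and the partition are arbitrary):

    def step(x):                    # deterministic microdynamics
        return 4.0 * x * (1.0 - x)

    def macro(x):                   # the coarse-grained observation
        return "LEFT" if x < 0.5 else "RIGHT"

    # Two microstates in the same observed macrostate evolve together
    # for a while, then land in different macrostates: one coarse state,
    # two futures, even though the dynamics is deterministic.
    s1, s2, t = 0.3000000, 0.3000001, 0
    while macro(s1) == macro(s2):
        s1, s2, t = step(s1), step(s2), t + 1
    print(f"macrostates agree for {t} steps, then diverge")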

To avoid rejecting prediction functions that make incorrect but close predictions, those functions should be made compatible with the observed outcomes by explicitly considering the margins of error on the approximate predictions. Then the outputs of such functions can include the actual outcome, as well as various small deviations from actuality—they avoid conclusive falsification by predicting approximately which state will result. If these techniques do lead to predictions of particular events with certainty, then either (contra the assumption) the prediction function is not a simplification or approximation at all, or the predictions will be incorrect, and the prediction functions should be rejected. If such approximate predictions can include at least a pair of mutually exclusive events, then we have unpredictability with respect to those events.

Finally, if the agent accepts a theory for pragmatic reasons, then that may induce a certain kind of failure of predictability, because the agent has restricted the range of available prediction functions to those that are provided by the theory, subject to the agent's epistemic and computational limitations. An agent who uses thermodynamics as her predictive theory in a world where classical statistical mechanics is the correct story of the microphysics thereby limits her ability to predict outcomes with perfect accuracy (since there are thermodynamically indistinguishable states that can evolve into thermodynamically distinguishable outcomes, if those initial states are statistical-mechanically distinguishable). Theories also make certain partitions of the real state space salient to predictors (the so-called level of description that the theory operates at), and this can lead to failures of predictability in much the same way as epistemic restrictions can (even though the agents might have other, pragmatic, reasons for adopting those partitions as salient—for instance, the explanatory value of robust macroscopic accounts).

These failures of prediction do not, intuitively, produce randomness. As I discussed earlier, some events which satisfy my definition of unpredictability are only mildly unpredictable. For instance, if the events are distinguished in a fine-grained way, and the prediction concentrates the posterior probability over only two of those events, then we may have a very precise and accurate prediction, even if not perfect.

6 Randomness is Unpredictability

We are now in a position to discuss my proposed analysis. I think that these epistemic features, derived from pragmatic and objective constraints on human knowledge, exhaust the concept of randomness. The views suggested by Suppes and Kyburg in the epigraphs to this paper provide some support for this proposal—philosophical intuition obviously acknowledges some epistemic constraints on legitimate judgements of randomness. So what kind of unpredictability do I think randomness is?

The following definition captures my proposal: randomness is maximal unpredictability.

Definition 8 (Randomness). An event E is random for a predictor P using theory T iff E is maximally unpredictable. An event E is maximally unpredictable for P and T iff the posterior probability of E, yielded by the prediction functions that T makes available, conditional on current evidence, is equal to the prior probability of E. That is, the posterior probability distribution (conditional on theory and evidence) is identical to the prior probability distribution. This also means that P's posterior credence in E, conditional on theory and current evidence (the current state of the system), must be equal to P's prior credence in E conditional only on theory, which is the same as its unconditional probability.30

We may call a process random, by extension, if each of the outcomes of the process is random; and, by a further extension, a sequence of outcomes is random just in case those outcomes are the outcomes of a random process.31 So rainfall inputs constitute a random process because the timing and magnitude of each rainfall event is random. This is perfectly compatible with those outcomes being a very regular sequence; it is merely unlikely to be such a sequence.

This definition and its extension immediately yield another, very illuminating, way to characterise randomness: a random event is probabilistically independent of the current and past states of the system, given the probabilities supported by the theory (when those current and past states are in line with the coarse-graining of the event space appropriate for the epistemic and pragmatic features of the predictor). The characteristic random events, on this construal, are the successive tosses of a coin: independent trials, identically distributed because the theory which governs each trial is the same, and the current state is irrelevant to the next or subsequent trials—a so-called Bernoulli process. But the idea of randomness as probabilistic independence is of far wider application than just to these types of cases. This connection between unpredictability and probabilistic independence is in large part what allows our analysis to give a satisfactory account of the statistical properties of random phenomena.

30 To connect up with our previous discussions, recall the discussion of the trivial prediction function above (§4.4). Any useful prediction method aims to uncover a significant correlation between future outcomes and present evidence, which would give probabilistic dependence between outcomes and input states.

31 At this point, it is worth addressing a putative counterexample raised by Andy Egan. A process with only one possible outcome is random on my account: there is only one event (one cell in the partition), which gets probability one. It also counts as predictable, because all of the probability measure is concentrated on the one possible state. I am perfectly happy with accepting this as an obviously degenerate and unimportant case. If a fix is nevertheless thought to be necessary, I would opt simply to require two possible outcomes for random processes; since the outcomes of a process {E1, . . . , En} partition the event space, this doesn't seem ad hoc.
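The posterior-equals-prior condition of Definition 8 can be verified empirically for a simulated Bernoulli process (another sketch of mine):

    import random

    random.seed(0)
    flips = [random.random() < 0.5 for _ in range(100_000)]

    # Conditioning on the current state (the previous flip) leaves the
    # distribution over the next flip unchanged: posterior = prior.
    prior = sum(flips) / len(flips)
    after_heads = [b for a, b in zip(flips, flips[1:]) if a]
    posterior = sum(after_heads) / len(after_heads)
    print(f"Pr(heads) = {prior:.3f}; Pr(heads | heads before) = {posterior:.3f}")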

I regard it as a significant argument in favour of my account that it can explain this close connection.33

However, there are a number of processes for which a strict probabilistic independence assumption fails, and taking this as the definition of randomness would have the unfortunate effect of misclassifying partially unpredictable processes as not random. Indeed, probably most natural processes are not composed of a sequence of independent events. For example, weather is not best modelled by a Bernoulli process, but rather by a Markov process, one where the probability of an outcome on a trial is explicitly dependent on the current state. Though over long time scales the weather is quite unpredictable, from day to day the weather is more stable: a fine day is more likely to be followed by another fine day. That the commonsense notion of randomness includes such partially unpredictable processes is a prima facie reason to take unpredictability, not independence, to be the fundamental notion—though nothing should obscure the fact that probabilistic independence is the most significant aspect of unpredictability for our purposes.

The use of 'random' to describe those processes which may display some short term predictability is quite in order, once we recognise the further contextual parameter of the temporal distance between input state and event (or random variable) to be predicted. Independence of events in a system is likely only to show itself over timescales where sensitive dependence on initial conditions and simplified dynamics have time to compound errors to the point where nothing whatsoever can be reliably inferred from the present case to some quite distant future event.32 (This also helps us decide not to classify as random those processes which are unpredictable in the limit as t grows arbitrarily, but which are remarkably regular and predictable at the timescales of human experimenters.)34

32 Compare the hierarchy of ergodic properties in statistical mechanics, where the increasing strength of the ergodic, mixing, and Bernoulli conditions serves to shorten the intervals after which each type of system yields random future events given past events (Sklar, 1993, pp. 235–40).

33 It is a central presupposition of my view that we can make robust statistical predictions concerning any process, random or not, and that for quite reasonable timescales these processes can become unpredictable. Further evidence for this claim is provided by the fact that probabilistic independence is an all-or-nothing matter.

34 Is there ever randomness without probabilistic order? Perhaps in Earman's space invader case. The event should be completely unexpected, and should not even be included in models of the theory; it is implausible to think that any prior probability for the space invasion is reasonable—not even a zero prior. This, as it stands, would be a counterexample to my analysis, since that analysis requires a probability distribution over outcomes. This would correspond to the event in question not even being part of the partition that the prediction function yields a distribution over; and if there is no distribution, the event is trivially not random. I think we can amend the definition so as to capture this case: add a clause to the definition of predictability requiring there to be some prediction function which takes the event into consideration.
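The Markovian weather example can be simulated in a few lines; the transition probabilities are invented for the illustration:

    import random

    random.seed(2)
    P = {"fine": {"fine": 0.8, "rain": 0.2},
         "rain": {"fine": 0.4, "rain": 0.6}}

    state, days = "fine", []
    for _ in range(100_000):
        days.append(state)
        state = random.choices(list(P[state]),
                               weights=list(P[state].values()))[0]

    # Short-term dependence: the posterior differs from the long-run prior.
    prior = days.count("fine") / len(days)
    next_after_fine = [b for a, b in zip(days, days[1:]) if a == "fine"]
    posterior = next_after_fine.count("fine") / len(next_after_fine)
    print(f"Pr(fine) = {prior:.2f}; Pr(fine | fine today) = {posterior:.2f}")

At a one-day horizon the process is partially predictable; as the horizon grows the chain mixes and the posterior relaxes back to the prior, recovering maximal unpredictability at a temporal distance.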

One of the hallmarks of random processes is that these are the best reliable predictions we can make, since the expectations of the variables whose values describe the characteristics of the event are well defined even while the details of the particular outcomes are obscure prior to their occurrence. This is crucial for the many scientific applications of randomness: random selections are unpredictable with respect to the exact composition of a sample (the event), but the overall distribution of properties over the individuals in that sample is supposed to be representative of the frequencies in the population as a whole. In random mating, the details of each mating pair are not predictable, but the overall rates of mating between parents of like genotype is governed by the frequency of that genotype in the population.

An event is random, then, just if it is unpredictable; that is, if the best theoretical representation of that event relative to a given predictor leaves the probability of that event unchanged when conditioned on the current state and the laws of the theory.35 Randomness is thus an extrinsic property of events, dependent on properties of agents and the theories they use, where those capacities are perfectly objective features of the predictors. An event is random just in case these objective features of the agents in question render that event unpredictable. We should give a naturalised account of the best theory relative to a predictor: that theory should be the one that maximises fit between the epistemic and computational capacities of the predictors and the demands on those capacities made by the theory.36 This means, therefore, that while ascriptions of randomness are sensitive to the requirements of the agents who are using the concept and making the ascriptions, they are nevertheless objectively determined, by the theories it is (objectively) appropriate for those agents to utilise. This observation will become important below (§6.3), when discussing whether randomness as I have defined it is subjective.

35 This illuminates the common ground my proposal shares with Martin-Löf's statistical testing view of randomness. If we take the patterns to be provided by some potentially predictive theory, then failing statistical tests is equivalent to being unpredictable with respect to that theory; for the theory provides no resources to reject the hypothesis that the only structure governing the sequence is pure chance. But a potentially predictive theory will not have infinitely many concurrent predictions for a single predictor or group of predictors, so no theory can provide the resources for full Martin-Löf randomness and still remain predictive. Nevertheless the spirit of the statistical test proposal remains, yet relativised to a set of statistical tests that can be actually applied to yield substantive information about the genesis and behaviour of a random process.

36 Of course, if agents know their epistemic limitations, they may know of deterministic theories which can correctly account for the phenomena, but the use of which lies outside their capabilities, except to creatures with computational abilities quite unlike our own. That is just one additional reason why randomness can correctly be assigned even in cases of perfect determinism.

6.1 Clarifying the Definition of Randomness

The definition of randomness might be further clarified by close examination of a particularly nice example that Tim Williamson proposed to me. Williamson's example was as follows: let us suppose that I regularly play chess against an opponent who is far superior to me. Not only does he beat me consistently, he beats me without my being aware at the time of his strategy and without my being able to anticipate any but the most obvious of his moves. In this situation, I cannot predict what his moves will be. Prima facie, it may appear that my proposal is committed to classifying his moves as random; if true, that would pose a serious problem for the view. Thankfully, there exist at least three lines of response to this example, each of which illuminates the thesis that randomness is unpredictability.

Firstly, note that unpredictability is theory relative. If I am convinced that my opponent is an agent who reasons and plans, no theory I will accept will have the consequence that his chess-playing behaviour is entirely random. What we have in this case is not sufficient for randomness, because we will never accept that the goal-directed activities of a rational agent are genuinely unpredictable; nor are those behaviours really probabilistically independent of preceding states. I certainly regard my opponent as being in a position to predict his own behaviour, and to predict it on the basis of the current state of play: his play is predictable, even if not by me. Indeed, we will never regard these apparently probabilistic outcomes as indicative of genuine probabilistic independence (since genuine probabilities have a modal aspect not exhausted by the actual statistics). It is not only the statistical aspects (i.e. actual frequencies of outcomes) of a phenomenon which dictate how it will be represented by theory.

This leads to consideration of a second point. It is essential to note that judgements of predictability will typically be made by an epistemic or scientific community, and not a particular individual. It is communities which accept scientific theories, and the capabilities and expertise of each member of the community contributes to its predictive powers. Since the relevant bearer of a predictive ability is an epistemic community, a phenomenon is judged random with respect to a community of predictors, not an individual. My chess playing opponent and myself are presumably members of the same scientific community, and the theories we jointly accept make his chess playing predictable—he knows the theory while I accept his authority with respect to knowing it and judge his play predictable. This is because the set of available prediction functions in a given theory does not reflect merely personal idiosyncrasies in understanding the theory, but instead reflects the intersubjective consensus on the capabilities of that theory. This serves to reinforce the point that the 'availability' to me of a theory, or of a prediction function, is not a matter of what's in my head, but rather of what theories count as normative for my judgements, given the kind of person I am and the kind of community I inhabit.

It does not, therefore, depend on whether those agents are actually in possession of that theory. One could define a concept of 'personal unpredictability', to capture those uses of the term 'unpredictable' that reflect the ignorance and incapacity of a particular individual. But—and this merely underscores the importance of the communitarian concept—such a personal unpredictability would have little or no claim to capture the central uses of the term 'unpredictable', nor any further useful application in the analysis of randomness or other concepts. It is clear that contingencies of ignorance shouldn't lead us to count something as random; it is a kind of (pragmatically/cognitively/theoretically) necessary lack of predictive power which makes an event random.

A third response also undermines the claim that this chess player's moves are unpredictable. For this is exactly the kind of situation where one might be frequently surprised by the moves that are made, but one can in retrospect assimilate them to an account of the opponent's strategy. Upon due consideration of his play, I can come to develop my understanding of that play, and hence develop better accounts of the nature of his chess playing. I can then realise that his behaviour was not random, though it may have appeared random at that time.

This last response illustrates a point that may not have been clear from the foregoing discussion: no mention was made, in the definition or its glosses, of any temporal conditions on the appropriateness of predictive theories for agents. That is, randomness is relative to the best theory of some phenomenon, where which theory counts as best is partially dictated by the cognitive and pragmatic parameters appropriate for some community of agents. Obviously it would be inappropriate to criticise past predictors on the grounds that they made internally warranted judgements of randomness that were false by the lights of theories we currently possess. On the other hand, it is true that they would deserve censure had it been the case that they were in possession of the best theory of some phenomenon, and had made judgements of predictability which were at variance with that theory. That is the sense in which theory-relative judgements of predictability are supposed to be normative for agents of the kind in question.

To turn back to the post facto analysis of my opponent's play: while playing I operated with a theory which was not sufficient to make accurate predictions concerning my opponent's behaviour, and I made a (perhaps) warranted judgement that his play was random, though in retrospect I can see that I had no robust external warrant for that judgement. It may have been (internally) epistemically acceptable for me at the time to judge his behaviour as random (setting aside for the time being the preceding two responses). But that judgement was at best preliminary and defeasible, and outcomes of such a process may well serve as evidence undermining the warrant for the judgement, and hence lead me to recognise the sense behind what appeared wrongly to be random play.

By contrast, events that are genuinely random do not contribute in this way further illumination of the process of which they are outcomes: no after the fact analysis of a random event will make greater predictive power available to me or my epistemic brethren. In one sense this is a simple corollary of the fact that the Bernoulli process is the paradigm random process, and outcomes in such a process are probabilistically, and hence predictively, independent. But in another it provides an important illustration of the application of the definition of randomness—judgements of randomness can be incorrect though warranted.37 Actual judgements of randomness approach correctness as the actual updating of credences more closely approximates the updating that would be licensed by possession of the best theory. As such, as long as an agent has credences that could be rationally updated in accordance with the best theory for the community of which that agent is a member—that is, whose credence function is suitably deferential to expert credence functions (van Fraassen, 1989, §§8.4–8.5)—we have the minimum necessary ingredients for potentially correct judgements of randomness. Obviously, credences play a central role in attributions of randomness, as it is only by way of updating credences that theories yield actual predictions.

6.2 Randomness and Probability

One may be wondering what kind of interpretation of probability goes into the definition.38 There is a further question concerning whether there are other kinds of objective probabilities ('chances') which are disclosed by the theories in question and count as normative for the credences of the predictor.

37 Williamson's example does point to a difficulty, however. Consider the hypothesis that our world is run by an omnipotent and completely rational deity, whose motives and reasons are quite beyond our ken. If we accept such a theory, we must accept both that (i) the events in the world have reason and purpose behind them, being the outcomes of a process of rational deliberation by a reasonable agent, and (ii) that the best theory of such events that we might ever possess will classify them as random; and hence our world appears quite capricious and arbitrary. This seems to me a genuine problem (though there is some question about its significance). One way around it might be to simply add a condition to the definition that, if the event in question is the outcome of some process of rational deliberation, it cannot be random, no matter how unpredictable it is. This proposal seems to avoid the problem only by stipulation. I prefer, therefore, to suggest that any event which can be rationalised (as the act of a recognisably rational agent) will be predictable, and that therefore if this deity's actions are genuinely unpredictable, they are not rationalisable, and I propose cannot be seen as purposive in the way required for the example to have any force.

38 Dorothy Edgington urged me to address this concern.

I hope that the account is neutral on this important issue. In fact, the only requirement that my account of randomness makes on an interpretation of probability is that an account must be given of the content of probabilistic models in scientific theories, which I take to be an uncontroversial but difficult standard to meet.39 Most naturally it might be thought that an objective account of probability could meet this demand, but subjectivist accounts must also be able to do so, given that all the agent does is accept some theory which features probabilistic models, and to shape expectations concerning the way the world will turn out. I hope that, no matter which account (if any) turns out to be correct, it can simply be slotted into this interpretation of randomness. Perhaps the only account that the view is not compatible with is von Mises' original frequency view: since he includes randomness as part of the definition of probability, on pain of circularity he cannot use this definition of randomness, which already mentions probability.

Von Mises' discussion of randomness was motivated by his desire to find firm grounds for the applicability of probability to empirical phenomena. I completely agree: random phenomena are frequently characterised by the fact that they can typically be given robust probabilistic explanations, particularly in terms of the probabilistic independence of certain events and certain initial data. Just because predictability is partially epistemic, and hence randomness is partially epistemic, doesn't mean that the probability governing the distribution of predictions is epistemic. Our epistemic stance mandates the use of probabilistic theories: so our cognitive capacities and pragmatic demands lead to the suitability of treating phenomena as random, modelling them probabilistically. But even if the grounds we have for applying probabilistic theories lie in our own cognitive incapacities, that does not hold for the probabilities postulated by those theories. Our epistemic account of randomness therefore provides a robust and novel explanation of the applicability of probabilistic theories even in deterministic cases, without having to mount the difficult argument that there are objective chances in deterministic worlds, and without sacrificing the objectivity of genuine probability assignments by adopting a wholesale subjectivist approach to probability. Randomness then has important metaphysical consequences for the understanding of chance, as well as being internally important to the project of understanding scientific theories that use the concept. Even so, the connection between the probabilities in those theories, and the credences implicit in our epistemic states, is by no means direct and straightforward.

39 Such a demand is tantamount to requiring the interpretation of probability be able to answer what van Fraassen (1989, p. 81) calls the 'fundamental question about chance': the interpretation must explain what feature of objective probability allows it to influence credence, via something like the Principal Principle (Lewis, 1980).

6.3 Subjectivity and Context-Sensitivity of Randomness

I have emphasised repeatedly that predictability is in part dependent on the properties of predictors. It may appear, then, that judgements of predictability, and hence of randomness, must be to some extent subjective and context-sensitive. Given values for the parameters of precision of observations and required accuracy of computations, and given a background theory, whether a process is predictable or not follows immediately. What one epistemic agent can predict, another with different capacities and different theories may not be able to predict. Laplacean gods presumably have more powerful predictive abilities than we do; for such gods, perhaps, nothing is random.40 Or consider a fungus, with quite limited representational capacities and hence limited predictive abilities: it evolved merely to respond to external stimuli, rather than to predict and anticipate. Almost everything is random for the fungus.

There is a worry that this subjectivity may seem counterintuitive. It may also seem quite worrying that a subjective concept may play a role in successful scientific theorising. I wish now to defuse these worries.

Firstly, it is a consequence of our remarks in §6.1 that two epistemic agents cannot reasonably differ merely over whether some process is unpredictable or random, in virtue of adopting different theories or having different epistemic capacities or pragmatic goals. If they rationally disagree over the predictability of some phenomenon, they must be members of different epistemic communities. Then there will be reasonably straightforward empirical tests of the predictive powers of that predictor who claims the process is not random. This disagreement will then be resolved if one takes these empirical results to indicate which theory more correctly describes the world.

Secondly, it should be quite unexceptional that agents who differ in their broader theoretical or practical orientation may differ also in their judgements of the predictability of some particular process. When we recognise that these parameter values do not vary freely and without constraint from agent to agent, but are subject to norms fixed by the communities of which agents are a part, it seems that rational agents can't easily disagree over randomness, and that purely personal and subjective features of those agents do not play a significant role in judgements of randomness. Given these qualifications, it might seem misleading to label the present account 'subjective'. It does not seem quite right to call predictability 'subjective' simply because agents with opposed epistemic abilities and commitments may reasonably disagree over its applicability.

40 John Burgess made this point.

But in this sense few concepts are truly subjective, for randomness and predictability are clearly not applied on a purely personal basis. Consider so-called 'subjective probability', which can be analysed either as a subjective property of events, or as grounded in an objective relation between an event and an agent with certain objective behavioural dispositions; in a similar fashion we typically use the standard subjective analysis of probability. Like other folk monadic properties, randomness can be analysed as a relation with the other terms fixed contextually. In the case of predictability and randomness, it is features and capacities of the predictor that fill in the 'missing' places of the relation, and as such these are capable of having further scientific significance. It is a matter of choice whether we decide to analyse the term as a predicate with subjective aspects, or as a relation, in order to make the semantics go more smoothly and to make the connection with past uses of the term clearer; but nothing of significance really turns on this. Insofar as we remain content to classify predictability as subjective, I would be inclined to claim that randomness is a partially (inter-)subjective property, simply to make clear where my proposal might fall in a taxonomy of analyses of randomness. These observations make it clear, however, that it is a quite unusual form of subjectivity.

The context sensitivity of randomness is more intriguing. Perhaps when stated so baldly the context-sensitivity of randomness might seem implausible. But randomness and predictability are only context sensitive in virtue of the fact that theory-acceptance is, I have argued (§4.3), very plausibly context sensitive. There are, therefore, contexts where thermodynamic reasoning is appropriate, even though that theory is false. In such contexts, the relevant theoretical possibilities are delimited by a theory that in other contexts would be repudiated as inadequate. Here the possibility arises that, in a given context, a judgement of randomness may be appropriate, even though the phenomenon in question might be predictable using another theory. As such, no special problem of context sensitivity arises for randomness that is not shared with other theory-dependent concepts. The natural alternative would be an invariantist account of randomness. Such an account would not be adequate, primarily because one would have to give a theory-independent account of randomness, and this would be manifestly inadequate to explain how the concept is used in diverse branches of scientific practice (§1).

Furthermore, in statistical testing we frequently wish to design random sequences so that they pass a selected set of statistical tests. In effect, we wish to use an effectively predictable phenomenon to produce sequences which mimic natural unpredictability by being selective about which resources (which statistical tests) we shall allow the predicting agents to use.
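Two members of a typical battery can be sketched as follows; these are simplified versions of the standard frequency and runs tests, with arbitrary thresholds:

    import math, random

    def monobit_ok(bits, alpha=0.01):
        """Frequency test: is the count of 1s consistent with fairness?"""
        n, ones = len(bits), sum(bits)
        return math.erfc(abs(2 * ones - n) / math.sqrt(2 * n)) > alpha

    def runs_ok(bits, alpha=0.01):
        """Runs test: are the alternations consistent with independence?"""
        n, pi = len(bits), sum(bits) / len(bits)
        runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
        z = abs(runs - 2 * n * pi * (1 - pi)) / (2 * math.sqrt(n) * pi * (1 - pi))
        return math.erfc(z / math.sqrt(2)) > alpha

    random.seed(42)
    pseudo = [random.getrandbits(1) for _ in range(10_000)]
    print(monobit_ok(pseudo), runs_ok(pseudo))   # a good PRNG should pass both
    patterned = [0, 1] * 5_000
    print(monobit_ok(patterned), runs_ok(patterned))  # fair counts, fails runs

A deterministic generator that passes the selected battery is, for agents restricted to those tests, unpredictable in just the sense required here.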

We certainly wish to explicitly characterise some ordered and regular outcome sequences as those a genuinely random process should avoid. But in selectively excluding certain non-random sequences we do not thereby adopt some new notion of 'pseudo-randomness' that applies to the sequences that remain. Those remaining sequences play precisely the theoretical role that random sequences play, which is to say, they license exactly the same statistical inferences. Dembski (1991) sees a fundamental split here between a deterministic pseudo-randomness and genuine randomness. I reject this split: accepting it would involve a significant distortion of the conceptual continuity between randomness in deterministic theories and randomness in indeterministic theories. Randomness by design, then, is randomness that arises from our adoption of empirically successful theories.

7 Evaluating the Analysis

I think the preceding section has given an intuitively appealing characterisation of randomness. The best argument for the analysis, then, will be if it is able to meet the demands we set down at the beginning of §2, and if it is capable of bearing the weight of the scientific uses to which it will be put. I maintain that unpredictability is perfectly able to support the explanatory strategies we examined in §1. In indeterministic situations, phenomena will be unpredictable at least partly in virtue of that indeterminism; randomness therefore shares in whatever explanatory power that indeterminism demonstrates in those cases. In deterministic cases, our account of the explanatory success of randomness must ultimately rest on something else, such as the pragmatic or epistemic features of agents who accept the probabilistic theories. Note that the proximate explanation of the explanatory success of randomness, deriving from unpredictability, remains unified across both the deterministic and indeterministic situations—a desirable feature of my proposal.

Our cognitive capacities are such that, in many cases, prediction of some phenomenon can only be achieved by exceedingly subtle and devious means. Better to recognise that the appropriate theory for that phenomenon classifies those phenomena as genuinely random: in that theoretical context, these phenomena are best treated as a random input to the system. The fact that these models are borne out empirically vindicates our methodology; we didn't have to show that rainfall was genuinely completely ontically undetermined in order to do good science about the phenomenon in question. This is similarly the case with random mating, weather prediction, noise and error correction, and coin tossing. In random sampling (and game theory), we merely need to use processes unpredictable by our opponents or by the experimental subjects to get the full benefits of the statistical inference: if they are forced to treat the process as random, then any skill they demonstrate in responding to that process must be due to purely intrinsic features of the trials to which they are responding.
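The rainfall case can be modelled in the spirit of the ecohydrology literature cited in §1, with storm arrivals as a random input having a definite probability distribution (all parameters here are invented):

    import random

    random.seed(7)
    P_STORM = 0.3        # chance of a storm on any given day
    MEAN_DEPTH = 8.0     # mean storm depth in mm (exponentially distributed)

    def season(days=365):
        total = 0.0
        for _ in range(days):
            if random.random() < P_STORM:
                total += random.expovariate(1 / MEAN_DEPTH)
        return total

    seasons = [season() for _ in range(1_000)]
    print(f"mean seasonal rainfall ~ {sum(seasons) / len(seasons):.0f} mm")
    # When and how much it rains on a given day is unpredictable; the
    # seasonal statistics the model yields are robust.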

In paradigm cases, unpredictability (and hence randomness) involves the probabilistic independence of some predicted event and the events which constitute the resources for the prediction. The defining feature of the scientific theories at which we looked in §1 is the presence of exact and robust statistical information despite ignorance about the precise constitution of the individual events. Rainfall events have a definite probability distribution, but precisely when and where it rains is random. If this is the hallmark of random phenomena, then we can easily see why the particular probabilistic version of unpredictability we used to define randomness is appropriate. In such cases, one can easily see how the inferential strategies we have identified are legitimate. With respect to random sampling, it is the probabilistic independence of the choice of test subjects from the choice of test regimes that allows for the application of significance tests on experimentally uncovered correlations. Correlations between the test subjects and the random sequence can still occur by chance; but since there can be no a priori guarantee that could ever rule out accidental correlations even in the case of genuinely indeterministic sequences, no account of randomness should wish to eliminate the possibility. We should only rule out predictable sources of correlation other than the one we wish to investigate. In the case of random mating, it is the fact that mating partner choice is probabilistically independent of the genetic endowment of that partner that allows the standard Hardy-Weinberg law to apply; it is relaxation of this independence requirement that makes for non-random mating. This probabilistic aspect of randomness and unpredictability is crucial to understanding the typical form of random processes and their role in understanding objective probability assignments by theories.

How does our analysis of randomness as unpredictability do on our four demands (§2)?

(1) Statistical Testing. Sequences that are unpredictable to an agent can be effectively produced, since those sequences do not need to have some known genuine indeterminacy in their means of production in order to ground the statistical inferences we make using them. Subjecting the process to a finite battery of statistical tests designed to weed out sequences that are predictable by standard human subjects is, while difficult, nevertheless possible.

(2) Finite Randomness. Single events, as well as finite processes, can be unpredictable.

(3) Explanation and Confirmation. A probabilistic theory which classifies some process as random is, as a whole, amenable to incremental confirmation (Howson and Urbach, 1993, ch. 14). Moreover a particular statistical hypothesis which states that the process has an unpredictable character can also be incrementally confirmed or disconfirmed, as the evidence is more or less characteristic of an unpredictable process. A special case is when the phenomenon is predicted better than chance; this would be strongly disconfirmatory of randomness. When confirmed, there seems no reason why such theories or hypotheses cannot also possess whatever features make for good explanations. We have gone to some lengths above to show that unpredictability can fill in quite adequately for randomness in typical uses; there seems no reason why it could not effectively substitute in explanations as well. Even if they are acceptable accounts of ontically deterministic situations, such theories can surely form part of excellent statistical explanations for why the processes exhibit the character they do.

(4) Determinism. Unpredictability occurs for many reasons independent of indeterminism, and is compatible with determinism. The key to explaining why randomness and indeterminism seem closely linked is that the theories themselves should not be deterministic, even without them being false of ontically deterministic situations. Thus we can still have random sequences in deterministic situations, and as part of theories that supervene on deterministic theories.

Analysing randomness as unpredictability, I maintain, gives us the features that the intuitive concept demands. It certainly does better than its rivals: it captures enough of our intuitions to truly deserve the name, without sacrificing its scientific applicability. The final, and most demanding, test would be to see how the account works in particular cases: how, for example, to cash out the hypothesis that the mate of a female Cambrian trilobite was chosen at random from among the male members of her population.41 In outline, my proposal is that a correct account of trilobite mating would show that there is no way for us to predict (retrodict) better than chance which mate would be (was) chosen, when knowledge concerning the male individuals is restricted to their heritable properties (which of course are the significant ones in a genetic context). This entails that there is no probabilistic dependence between possession of a certain phenotype and success in mating. (Of course, given extra knowledge, such as the knowledge concerning which male actually did succeed in mating with this individual, or given facts about location or opportunity, we can predict better than chance which male would be successful. But these properties are not genetic, and do not conflict with the assumption of random mating.) This account of how to apply the theory must remain a sketch, but I hope it is clear how the proposal might apply to other cases.

41 I owe the question, and the illustrative example, to John Burgess.
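One hedged way to begin cashing out that sketch computationally is to simulate matings in which success is independent of a heritable phenotype, and confirm that the phenotype then licenses no better-than-chance retrodiction (the population and the phenotype are of course invented):

    import random

    random.seed(5)
    males = [{"phenotype": random.choice(["large", "small"])}
             for _ in range(2_000)]
    matings = [random.choice(males) for _ in range(10_000)]  # random mate choice

    freq_overall = sum(m["phenotype"] == "large" for m in males) / len(males)
    freq_mated = sum(m["phenotype"] == "large" for m in matings) / len(matings)
    # Under random mating the two frequencies agree up to sampling error:
    # the phenotype carries no information about mating success. A
    # persistent gap would strongly disconfirm the randomness hypothesis.
    print(f"freq(large) among males = {freq_overall:.3f}, "
          f"among matings = {freq_mated:.3f}")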

Acknowledgements Thanks to audiences at Oxford, Leeds, Cambridge and Princeton, and the 2002 AAP in Christchurch. Particular thanks to Paul Benacerraf, John Burgess, Adam Elga, Alan Hájek, Hans Halvorson, Lizzie Maughan and Bas van Fraassen for comments on earlier written versions.

1 January 2005

Bibliography

Albert, David Z. (1992). Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press.

Batterman, R. and White, H. (1996). “Chaos and Algorithmic Complexity”. Foundations of Physics, vol. 26: pp. 307–36.

Bell, J. S. (1987a). “Are there Quantum Jumps?” In Bell (1987c), pp. 201–12.

Bell, J. S. (1987b). “On the Impossible Pilot Wave”. In Bell (1987c), pp. 159–68.

Bell, J. S. (1987c). Speakable and Unspeakable in Quantum Mechanics. Cambridge: Cambridge University Press.

Bishop, Robert C. (2003). “On Separating Predictability and Determinism”. Erkenntnis, vol. 58: pp. 169–88.

Chaitin, Gregory (1975). “Randomness and Mathematical Proof”. Scientific American, vol. 232: pp. 47–52.

Church, Alonzo (1940). “On the Concept of a Random Sequence”. Bulletin of the American Mathematical Society, vol. 46: pp. 130–135.

Dembski, William A. (1991). “Randomness By Design”. Noûs, vol. 25: pp. 75–106.

Earman, John (1986). A Primer on Determinism. Dordrecht: D. Reidel.

Fisher, R. A. (1935). The Design of Experiments. London: Oliver and Boyd.

(2004). C. Howson. pp. 369–376. 3 ed. (1963). Hume’s Problem: Induction and the Justification of Belief . A Primer of Population Genetics. 2. Thomas L. 411–34. “Unified Dynamics for Microscopic and Macroscopic Systems”. A. Roman (2004). G. 55: pp. Hartl. PSA 1978.). (2000). “Randomness and Reality”. 2 ed.. (1978).Frigg. “Algorithms and Randomness”. SIAM Theory of Probability and Applications. vol. Reidel. “In What Sense is the Kolmogorov-Sinai Entropy a Measure for Chaotic Behaviour?—Bridging the Gap Between Dynamical Systems Theory and Communication Theory”. PSA 1978. Colin and Urbach. vol. Physical Review D. G. T. Griffiths. Chicago: Open Court. Kyburg. Subjective Probability (The Real Thing). N. and Weber. A. Sankhya.). The Structure and Interpretation of Quantum Mechanics. The Logical Foundations of Statistical Inference. vol. 98–113. In Peter D. 79–97. Henry E. N. 34: p. Hellman. Richard C. (1988). (1989). MA: Sinauer. Peter (1993). Rimini. (1986). In Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society. Howson. vol. Jeffrey. R. A. (1974). Asquith and Ian Hacking (eds. Geoffrey (1978). Colin (2000). 470. I. Joshua B. Oxford: Oxford University Press. Cumberland. Paul W. Dordrecht: D. 25: pp. Kolmogorov. Cambridge. 2. 40 . Scientific Reasoning: the Bayesian Approach. MA: Harvard University Press. British Journal for the Philosophy of Science. Chicago: University of Chicago Press. Cambridge: Cambridge University Press. vol. Humphreys. “On Tables of Random Numbers”. and Uspensky. A. pp. V. Hughes. vol. 389–412. Kolmogorov. Ghirardi. and Tenenbaum. “Is “Physical Randomness” Just Indeterminism in Disguise?” In Peter D. Asquith and Ian Hacking (eds. (2001). Daniel L. 32: pp. “Randomness and Coincidences: Reconciling Intuition and Probability Theory”. Chicago: University of Chicago Press. Jr..

Laio, F., Porporato, A., Ridolfi, L. and Rodriguez-Iturbe, Ignacio (2001). “Plants in Water-Controlled Ecosystems: Active Role in Hydrological Processes and Response to Water Stress. II. Probabilistic Soil Moisture Dynamics”. Advances in Water Resources, vol. 24: pp. 707–23.

Laplace, Pierre-Simon (1951). Philosophical Essay on Probabilities. New York: Dover.

Lewis, David (1980). “A Subjectivist’s Guide to Objective Chance”. In Lewis (1986), pp. 83–132.

Lewis, David (1984). “Putnam’s Paradox”. Australasian Journal of Philosophy, vol. 62: pp. 221–36.

Lewis, David (1986). Philosophical Papers, vol. 2. Oxford: Oxford University Press.

Li, Ming and Vitanyi, Paul M. B. (1997). An Introduction to Kolmogorov Complexity and Its Applications. 2 ed. Berlin and New York: Springer Verlag.

Martin-Löf, Per (1966). “The Definition of a Random Sequence”. Information and Control, vol. 9: pp. 602–619.

Martin-Löf, Per (1969). “The Literature on von Mises’ Kollektivs Revisited”. Theoria, vol. 35: pp. 12–37.

Martin-Löf, Per (1970). “On the Notion of Randomness”. In A. Kino (ed.), Intuitionism and Proof Theory, Amsterdam: North-Holland.

Mayo, Deborah (1996). Error and the Growth of Experimental Knowledge. Chicago: University of Chicago Press.

Montague, Richard (1974). “Deterministic Theories”. In Richmond H. Thomason (ed.), Formal Philosophy, New Haven: Yale University Press.

Popper, Karl (1982). The Open Universe. Totowa, NJ: Rowman and Littlefield.

Rodriguez-Iturbe, Ignacio (2000). “Ecohydrology”. Water Resources Research, vol. 36: pp. 3–10.

Rodriguez-Iturbe, Ignacio, Porporato, A., Ridolfi, L., Isham, V. and Cox, D. R. (1999). “Probabilistic Modelling of Water Balance at a Point: the Role of Climate, Soil and Vegetation”. Proceedings of the Royal Society of London, vol. 455: pp. 3789–3805.

Schurz, Gerhard (1995). “Kinds of Unpredictability in Deterministic Systems”. In P. Weingartner and G. Schurz (eds.), Law and Prediction in the Light of Chaos Research, Berlin: Springer, pp. 123–41.

Shannon, Claude E. and Weaver, William (1949). Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Sklar, Lawrence (1993). Physics and Chance. Cambridge: Cambridge University Press.

Skyrms, Brian (1996). Evolution of the Social Contract. Cambridge: Cambridge University Press.

Smith, Peter (1998). Explaining Chaos. Cambridge: Cambridge University Press.

Stalnaker, Robert C. (1984). Inquiry. Cambridge, MA: MIT Press.

Stone, M. A. (1989). “Chaos, Prediction and Laplacean Determinism”. American Philosophical Quarterly, vol. 26: pp. 123–31.

Strevens, Michael (2003). Bigger than Chaos: Understanding Complexity through Probability. Cambridge, MA: Harvard University Press.

Suppes, Patrick (1984). Probabilistic Metaphysics. Oxford: Blackwell.

van Fraassen, Bas C. (1980). The Scientific Image. Oxford: Oxford University Press.

van Fraassen, Bas C. (1989). Laws and Symmetry. Oxford: Oxford University Press.

van Lambalgen, Michiel (1987). “Von Mises’ Definition of Random Sequences Reconsidered”. Journal of Symbolic Logic, vol. 52: pp. 725–55.

van Lambalgen, Michiel (1995). “Randomness and Infinity”. Tech. Rep. ML-1995-01, ILLC, University of Amsterdam. URL www.illc.uva.nl/Publications/ResearchReports/ML-1995-01.text.ps.gz.

Ville, J. (1939). Étude Critique de la Notion de Collectif. Paris: Gauthier-Villars.

von Mises, Richard (1957). Probability, Statistics and Truth. 2 ed. New York: Dover.

Wigner, Eugene P. (1961). “Remarks on the Mind-Body Question”. In I. J. Good (ed.), The Scientist Speculates, New York: Basic Books, pp. 284–302.
