
Biology and Philosophy 17: 613–634, 2002.

© 2003 Kluwer Academic Publishers. Printed in the Netherlands.

Rationality, biology and optimality*

CAROLYN PRICE
Department of Philosophy, Faculty of Arts, The Open University, Walton Hall, Milton Keynes MK7 6AA,
UK (e-mail: c.s.price@open.ac.uk; phone: (44) 01908 659214)

Received 22 November 2001; accepted in revised form 27 March 2002

Key words: Biology, History, Learning, Normativity, Optimality, Rationality

Abstract. A historical theory of rational norms claims that, if we are supposed to think rationally, this is
because it is biologically normal for us to do so. The historical theorist is committed to the view that we
are supposed to think rationally only if, in the past, adult humans sometimes thought rationally. I consider
whether there is any plausible model of rational norms that can be adopted by the historical theorist that is
compatible with the claim that adult human beings are subject to rational norms, given certain plausible
empirical assumptions about our history and capabilities. I suggest that there is one such model: this
model centres on the idea that a procedure is rational if it has been endorsed (or at least not rejected) by
mechanisms whose function is to ensure that the subject learns to reason in a way that approaches a
certain kind of optimality.

Introduction

Whenever I form a belief that runs counter to the evidence, or entertain obviously
inconsistent beliefs, I fail to reason as I am supposed to: rationality is a norm that
governs reasoning. This raises a question: what is the origin or source of rational
norms? The issue is likely to be particularly puzzling to those who, like myself, view
the mind as a natural phenomenon: norms do not fit neatly within the natural world.
Naturalists are under pressure to explain rational norms away, or to show that they
arise from non-normative facts of some kind.
In this paper, I would like to investigate one strategy that a naturalist might adopt
in order to accommodate rational norms: this is, to treat the norms of rationality as
biological norms. This strategy has been used elsewhere in the Philosophy of Mind:
a number of writers have argued in favour of the view that the norms that govern
intentional phenomena are biological norms. Essential to this strategy is the claim
that it is possible to naturalise biological norms, by treating them as arising from
non-normative facts about the capacities or the history of biological systems.1 If
biological norms can indeed be naturalised in this way, and if the norms of
intentionality and rationality can be regarded as biological norms, then this will be
an important step forward in developing a naturalistic account of the mind.
It is not my aim in this paper to provide a defence of this project, or to develop a
biological theory of rational norms in any detail. Rather, my intention is to sketch

1. See Millikan (1984a, 1989), Price (2001).

what I take to be a plausible version of the account, and to investigate its
implications with respect to two further issues: the scope or applicability of rational
norms, and their nature. Although questions about the source, scope and nature of
rational norms are distinct, they cannot necessarily be resolved in isolation from
each other. Given certain empirical assumptions, a theorist’s position on two points
may constrain her position on the third. Indeed, it might be suggested that a theorist
who adopts a biological theory of the source of rational norms will face an
unpleasant dilemma. Given that certain empirical assumptions hold, she must
choose between an implausibly weak account of the nature of rational norms and an
implausibly narrow account of their scope. It is this suggestion that I would like to
explore.
The norms of rationality govern a number of different kinds of reasoning,
including reasoning about what to believe and reasoning about how to act. For
simplicity’s sake, I shall consider only norms that govern the formation and
retention of beliefs. Moreover, I shall begin with only the most anodyne assump-
tions about what the norms of theoretical rationality require: for example, I shall
assume that a theoretically rational subject is required to form beliefs in a way that is
sensitive to evidence, to resolve inconsistencies between his beliefs, and to be aware
of some of their immediate implications. But I shall not begin with assumptions
about how well-grounded or how consistent his beliefs will be; nor shall I make
assumptions about how a rational subject will ensure that these standards are met.2
I shall begin by introducing the theory that I would like to investigate, and by
considering what it implies about the scope and nature of rational norms (section 2).
We will then be in a good position to see why it might be thought that, given certain
plausible empirical assumptions, the theory runs into trouble (section 3). I shall end
by suggesting a possible solution (section 4).

The source, scope and nature of rational norms

The source of rational norms: the historical theory

There is a great deal of literature dedicated to the attempt to develop a biological
account of intentional content.3 Millikan (1984b) has suggested that we might adopt
a theory of knowledge along similar lines. A biological theory of the source of
rational norms would fit neatly within this project. Such a theory would possess a
number of features that should make it attractive to those who favour a naturalistic

2. Throughout the paper, I shall treat the norms of rationality as if they applied to subjects, rather than
to reasoning techniques or beliefs. This is not because I think that it is inappropriate to describe a
technique or a belief as rational or irrational; but I think claims about the rationality or otherwise of
techniques and beliefs derive from claims about the reasoning of particular subjects in particular
situations.
3. For some examples, see Millikan (1984a), Papineau (1993), Dretske (1988, 1995), Neander (1995), Price (2001).

approach to the mind. First, it seeks to provide an account that is metaphysically
modest. In other words, it avoids the need to posit independent rational norms.
Secondly, it seeks to make the normativity of rationality unmysterious, by treating
rational norms as grounded in causal facts of an already familiar kind. Finally, it
allows us to treat rational norms as objective, in the sense that the appropriateness of
talk about rational norms is not contingent on what we happen to value.4
I will refer to the theory that I would like to investigate as the historical theory.
The historical theory centres on the notion of normality proposed by Ruth Millikan.
According to Millikan, a device will be operating normally provided that the
following two conditions are met: (1) devices of the same type have operated in that
way in the past; (2) the fact that they did so helps to explain the presence of the
device. For example, it is normal for my heart to beat at a certain speed when I am at
rest. This is because my ancestors’ hearts beat at that speed when they were at rest;
and the fact that they did so helped them to survive and to reproduce, thereby
helping to ensure that my heart is here today (Millikan (1984a): 33–34).
At first glance, it might seem that there is no room to apply this notion of
normality to capacities or forms of behaviour that organisms develop through
learning. For example, suppose that a chimpanzee learns to wash fruit, and that this
is a novel form of behaviour that no chimpanzee has produced before. It might seem
that this behaviour cannot be considered normal, in the sense defined, because
chimpanzees have not behaved in this way or benefited from this kind of behaviour
in the past. However, Millikan’s full account provides us with the resources to deal
with this kind of case. It allows that learned behaviour will count as normal if it has
been generated in a normal way (Millikan (1984a): 41–42).
Suppose that the chimpanzee’s capacity to learn is sustained by learning mecha-
nisms of some kind. And suppose that the operation of mechanisms of the same kind
benefited earlier chimpanzees by enabling them to develop novel, biologically
useful forms of behaviour. Finally, suppose that they normally perform that function
in a certain way. On Millikan’s account, we can treat the chimpanzee’s fruit-
washing behaviour as normal, provided that it has been generated by these learning
mechanisms, operating in their normal way. The normality of the chimpanzee’s
behaviour is derived from the normality of the learning process that generated it.
The significance of this point will emerge later on. For the time being, we need
only the idea that the workings of biological mechanisms are governed by certain
standards of normality, standards that are determined by the history of those
mechanisms.
Our capacity to reason is, I take it, a biological capacity, in that it is underpinned
by the operation of certain biological systems. These systems normally operate in a
certain way, and so, by extension, we can be described as reasoning in a biologically
normal or abnormal way. Moreover, we might suppose that for adult humans,
normal reasoning coincides with rational reasoning. In other words, we might
suppose that our ancestors survived and reproduced partly because they sometimes

4. For a more precise statement of my views on this, see (Price (2001): 42–47).

reasoned in a rational way. The historical theorist takes one further step: according
to the historical theorist, to say that the norms of rationality apply to us just is to say
that it is biologically normal for us to reason in a rational way.

The scope of rational norms: ‘ought’ implies ‘did’

Adult humans are supposed to form beliefs in a rational way: an adult human who
engages in wishful thinking or refuses to accept an obvious implication of one of his
beliefs is failing to reason as he should. In contrast, a toddler who makes an error of
one of these kinds is not failing to reason as he should: the norms of rationality do
not apply to toddlers. The same applies to non-human animals. It would be
inappropriate to accuse a monkey of irrationality, even if we had reason to convict it
of wishful thinking or inconsistency.5
This, at least, is the standard view.6 A proponent of this view is committed to a
certain kind of optimism about the scope of rational norms, in that she holds that
rational norms do apply to adult humans. Optimism about the scope of rational
norms can be contrasted with the pessimistic claim that the norms of rationality do
not in fact apply to adult humans, any more than they apply to toddlers and
non-human animals. Pessimism is not, I take it, an obviously false or incoherent
view,7 but it is not an attractive position – we would need strong theoretical or
empirical reasons to adopt it.
How might an optimist explain the limited scope of rational norms? A common
response is to appeal to the principle that ‘ought’ implies ‘can’. However, it is not
immediately clear what the consequences of adopting this principle will be: there are
a number of different ways in which it is possible to interpret the principle,
depending on what is meant by ‘can’. (Stich (1990): 156) suggests that we should
understand ‘can’ as ‘can learn to’. On this view, rational norms apply selectively to
adult humans because only adult humans are capable of learning to satisfy them.
The historical theory implies a rather different solution to the selectivity problem.
According to the historical theory, rational norms apply to human adults because our
ancestors, as adults, sometimes helped to ensure their own survival and the survival
of their offspring by forming beliefs in a rational way. This was not true of our
ancestors as young children; nor was it true of the ancestors of other animals. For
this explanation to be true, it must have been the case that our ancestors, as adults,
did sometimes form beliefs in a rational way, and that they benefited by doing so.
The solution offered by the historical theorist turns, not on the principle that ‘ought’
implies ‘can’, but on the principle that ‘ought’ implies ‘did’.

5. Here it is easier to think of examples involving what, for an adult human, would be a failure of
practical rationality.
6. I suspect that what I have termed ‘the standard view’ is overstated, in that the distinction between
adult humans and other cognisers is a matter of degree rather than kind. If so, the problem of selectivity
should be rephrased as a problem about why we apply different standards to adult humans, toddlers and
monkeys.
7. Contrast Cohen (1981).

This feature of the historical theory has important implications for what the
historical theorist is able to say about the nature and scope of rational norms. If the
historical theorist wishes to maintain an optimistic view of the scope of rational
norms, she must adopt a model of rationality that treats rationality as something that
our ancestors actually achieved, and that helped them to survive and to reproduce.

The nature of rational norms: ideal or ecological?

A number of writers have suggested that there is a constitutive connection between
rationality and optimality.8 This implies that a subject meets the standards of
rationality when he forms beliefs in the best possible way. As it stands, however,
this is somewhat vague. A procedure is optimal only relative to a goal and to a set of
constraints; we need to specify what they are.
One suggestion is enshrined in what is known as the ideal model of rationality.
According to this model, the goal of theoretical rationality is to ensure that the
subject forms true beliefs and avoids false ones. An ideally optimal procedure will
be one that is best suited to satisfy this goal in ideal circumstances: that is, in the
absence of contingent constraints, such as limited time or resources. An ideally
rational subject will form his beliefs in accordance with all the relevant evidence; he
will draw out all the implications of his beliefs; he will ensure that all his beliefs are
mutually consistent. These requirements reflect very general features of reality: for
example, the requirement that our beliefs be consistent reflects the fact that reality is
consistent. It is because these requirements reflect general features of reality that a
subject who meets them is perfectly placed to form true beliefs.
As has often been pointed out, the ideal model cannot be regarded as providing
standards that adult humans are actually able to meet.9 This is because humans are
finite beings. As a result, it is simply not feasible for an adult human to gather all the
evidence that might be relevant to a particular problem, or to become aware of every
implication of his beliefs, or to check all his beliefs for inconsistency. These are
tasks that would require infinite informational and computational resources. If we
were to try to reason in an ideal way, we would never actually succeed in making a
judgement. We would always be searching for evidence that might affect our
decision and checking to ensure that it was consistent with our other beliefs.
For these reasons, the historical theorist cannot combine the ideal model of
rational norms with optimism about the scope of rational norms. Our ancestors were
finite beings, and so could not have reasoned in accordance with the standards of
ideal rationality. Moreover, they could hardly have ensured their own survival by
reasoning in a way that precluded them from ever making a judgement.
Recently, a number of writers have proposed that the ideal model should be
replaced by a model of rationality that takes account of the limitations and needs of

8. See especially (Rescher (1988): 1–6); (Stich (1990): 151–56); (Nozick (1993): 36); (Oaksford and
Chater (1994): 608).
9. See especially (Simon (1972): 165–66); (Cherniak (1986): 7–27); (Gigerenzer and Todd (1999): 8–10).

actual thinkers. I will refer to a model of this kind as an ecological model of
rationality. The details of these proposals differ: the version that I will consider here
is inspired by a number of writers.10 On this version of the model, a procedure will
count as ecologically optimal if there is no identifiably better procedure that the
subject could use, given the following two constraints: (1) the subject has restricted
computational capacities and limited access to information; (2) the subject needs to
form and to retain true beliefs in a way that will help him to satisfy his needs as a
biological organism – for example, he needs to make judgements quickly enough to
be useful in guiding his behaviour. In what follows, I will refer to such a procedure
as an optimal* procedure.
There is one element in this definition of optimality* that requires comment. As
(Gigerenzer and Todd (1999): 10–12) point out, there is no guarantee that a subject
with limited computational and informational resources will be able to ascertain
which is the best procedure to use in any given situation. To accommodate this
point, I have included in my definition of optimality* the requirement that there is
no identifiably better procedure that the subject could use: ‘identifiably’ is to be
understood as shorthand for ‘identifiably by us, given our limited informational and
computational resources’. As a result of this proviso, optimality* must be regarded
as relative to our capacities in more than one way.
The notion of optimality* can be contrasted with a third notion of optimality,
familiar from the literature concerning the nature and utility of adaptationist
assumptions in biology.11 Here, the prevalent notion is that of optimality relative to
the goal of maximising fitness.12 The relevant constraints are controversial, but
might include the organism’s way of life, the nature and distribution of resources
and hazards in its environment, the organism’s other capacities and traits, and the
capacities and traits of conspecifics. A further possible set of constraints includes the
historical, genetic and developmental factors that determine whether a certain trait
could become established in a certain population of organisms.13
The notion of optimality* differs from this adaptationist notion of optimality in
two ways. First, the notion of optimality* is relative, not to fitness, but to the goal of
ensuring that the subject forms true beliefs, and forms them quickly and efficiently.
It would certainly be plausible to assume that the optimal* exercise of this capacity
will promote fitness. However, there is no implication that the optimal* exercise of
this capacity is likely to maximise fitness. Indeed, later on in the paper, we will
encounter a good reason to question this suggestion. As a result, there is no reason to
assume that optimal* reasoning will be adaptationally optimal.
Secondly, the constraints on optimality* are not obviously biologically salient.

10. Anderson (1991); (Evans and Over (1996): 8); Oaksford and Chater (1994), Gigerenzer and Todd
(1999).
11. For example, see the papers in Dupré (1987) and in Orzack and Sober (2001).
12. For different ways of making this more precise, see (Lewontin (1987): 153); (Emlen (1987): 164);
(Gilchrist and Kingsolver (2001): 225). I have not offered a precise definition here, as this controversy
lies outside the scope of this paper.
13. See (Kitcher (1987): 88–89); (Lewontin (1987): 154–57); (Emlen (1987): 167).

They include the subject’s need for true and useful beliefs and the finite nature of his
informational and computational capacities. But there is no reference to the
subject’s other traits and capacities. Moreover, there are other considerations
relating to the subject’s cognitive capacities that are not included – for example, the
precise design of the subject’s cognitive mechanisms. Also excluded are the
historical, genetic and developmental factors that help to explain why our cognitive
capacities are as they are. It will be important to keep these points in mind in what
follows.
According to the ecological model as I have characterised it here, a subject will
count as forming a belief in a rational way if he forms it in accordance with a
procedure that is optimal*. An optimal* procedure will be one that best combines
accuracy with efficiency. Hence, it will not be an ideal procedure: for example, a
human being who is reasoning in an optimal* way will stop checking his beliefs for
consistency at a certain point, because to go on checking would prevent him from
satisfying the needs that his cognitive system exists to serve, and so would not be an
effective procedure for him to adopt, given his needs.14
A proponent of this model needs to explain how to bridge the gap between the
limitations of rational subjects and their need for true beliefs. Central to the
ecological model is the idea that a rational being will employ certain shortcuts or
heuristics that exploit the peculiar features of the environment that he inhabits. The
notion of bounded rationality developed by Simon (1956) enshrines the idea that
rational subjects are able to exploit the structure of their environment in order to
simplify the cognitive challenges that they face. By relying on assumptions about
certain predictable features of the environment, the rational subject is able to deploy
cognitive procedures that are both accurate and efficient, in that they make the best
use of the computational and informational resources available to him.15 I will refer
to these techniques as optimal* heuristics.
It might be asked why I do not use another, more familiar term – the term ‘fast and
frugal heuristics’ coined by (Gigerenzer and Todd (1999): 14–15). The reason for
this is that Gigerenzer and Todd favour a version of the ecological model of
rationality which does not require that the techniques employed by a rational subject
should be optimal*, but only that they should be highly accurate and efficient. The
notion of an optimal* heuristic is rather stronger than the notion of a ‘fast and
frugal’ one.16
The possibility that an ecologically rational subject might employ heuristics that
exploit the structure of his environment opens up a further gap between rationality

14. This need not be taken to imply that he will need to calculate when he should stop: he may simply
operate in accordance with some generally reliable stopping rule.
15. Relevant features of the subject’s environment may include the cognitive procedures of other
subjects. For example, the value of testimony depends crucially on this. This is a factor that is highly
likely to change over time.
16. I would prefer a model that centres on optimality* rather than excellence, because it allows us to
rule out the possibility that a subject might count as proceeding in a rational way if he is using a certain
technique (albeit a highly accurate and efficient one) even though he knows that there is an identifiably
better procedure that he could use.

as it is characterized by the ideal model and rationality as it is characterized by the
ecological model. On the ecological model, the norms of rationality may reflect
quite specific assumptions about the environment that we inhabit, assumptions that
might not be true elsewhere. As a result, the norms of rationality cannot be seen as
absolute: it may be that what is optimal* for human beings living in our environment
is not optimal* for members of another intelligent species, or for humans who
inhabit a very different environment.17
The ecological model, at first glance, provides an attractive picture of rational
norms. Unlike the ideal model, it appears to characterise rational norms in a way that
puts them within our reach. Nevertheless, it sets the standards high: there is no risk
that a subject who is operating in an optimal* way might be using techniques that
are inherently incoherent or unreliable, at least within his current environment.
Moreover, it might seem that the ecological approach fits very well with a
historical account of rational norms. It might well be suggested that our ancestors’
success in forming true and useful beliefs is explained by the fact that they
sometimes reasoned in an optimal* way. If so, it will be normal for us to reason in
an optimal* way. Hence, the historical theorist will be able to adopt the ecological
model of rationality without any pressure to abandon optimism concerning the scope
of rational norms. Unfortunately, this assessment faces two objections: the objection
from flawed design and the objection from useful falsehoods.

The historical theorist’s dilemma

The objection from flawed design

The first objection might be introduced as follows. The historical theorist has no
warrant to assume that our ancestors ever reasoned in an optimal* way.18 Our
ancestors might have survived and prospered even though the reasoning techniques
that they employed were far from optimal*: what mattered was that their techniques
were good enough to promote their survival. Provided that they did so, it will be
biologically normal for us to use similar techniques, no matter how flawed they
might be. The historical theorist, then, cannot assume that we are supposed to use
optimal* reasoning techniques. Hence, she faces a dilemma: either she must give up
the ecological model of rationality, or she must accept that she has no good reason to
assume that adult humans are subject to rational norms.
There are two ways in which the historical theorist might attempt to counter this
objection. First, she might suggest that it relies on a general misconception of the
nature of the evolutionary process. To support this suggestion, she would need to

17. But there is no suggestion that rational norms are relative to individuals.
18. This point has been made by Sober (1981); (Lycan (1981): 345); (Stich (1990): 63–7, 96–7); and (Stein (1996): 186–205). Contrast (Dennett (1987): 96).

adopt some version of adaptationism, construed as making a predictive claim.19 The
claim will be (roughly) that, where the traits and capacities of an organism have
evolved over a long period of time, it is reasonable to expect them to be optimal,
relative to fitness. Of course, as we have seen, the notion of optimality at issue here
is not identical to the notion of optimality*. But the historical theorist might appeal
to this adaptationist thesis to support the suggestion that, since the design of our
cognitive mechanisms has been subject to evolutionary pressure over many millen-
nia, it is reasonable to expect them to function in an optimal* way.20
However, adaptationism, construed in this way, is highly controversial. Its
plausibility depends to a large extent on how the notion of adaptational optimality is
understood. On a strong reading, the thesis is understood as the claim that it is
reasonable to expect the capacities of evolved organisms to be optimal, given the
organism’s way of life, its other capacities, and its environment. On this reading, the
thesis appears implausible, in that it ignores the historical, genetic and developmen-
tal factors that determine whether it is possible for a particular trait to become
established in a population. On a more modest reading, the thesis might be
understood as the claim that it is reasonable to expect the capacities of evolved
organisms to be optimal relative to a broader range of constraints – including
historical, genetic and developmental factors.21 However, this more modest thesis
will not help the historical theorist: for, as we have seen, the notion of optimality*
does not incorporate constraints of these kinds. It seems, then, that there is no
plausible version of the adaptationist thesis that will help the historical theorist to
answer the objection.
Alternatively, the historical theorist might produce some specific reason to
suppose that our ancestors generally reasoned in an optimal* way. In particular, she
might argue that modern humans generally reason in an optimal* way, and that it is
therefore reasonable to assume that our ancestors did so too. Unfortunately,
however, the first premise of this response is debatable: philosophers working on
issues connected with human rationality are now familiar with the claim that adult
humans do not generally form beliefs in an optimal* way, or at least that they do not
do so as consistently as our pre-scientific intuitions suggest. There are two sources
of this scepticism about our cognitive capacities.
First, there is a body of experimental evidence that has been taken to suggest that
adult humans make predictable errors in performing certain reasoning tasks. In some
cases, these errors are presented as the result of simple flaws in our reasoning

19. Many adaptationists prefer to view adaptationism as a methodological tool rather than a predictive assumption. For discussion, see Dupré (1987): 22–23; Kitcher (1987): 80; Emlen (1987): 163–64;
Abrams (2001): 273–74.
20. For the importance of the length of the evolutionary process, see (Smith (1987): 222); (Eshel and
Feldman (2001): 183).
21. This version of the thesis is modest but not trivial: its proponent is committed to the claim that
chance does not play a major role in determining the traits of evolved organisms (Kitcher (1987): 89). I
am not qualified to judge the plausibility of this claim.

competence: human beings, it is suggested, are not good intuitive reasoners when it
comes to certain kinds of problem. For example, adult humans are all too ready to
accept the Gambler’s Fallacy; they tend to underestimate the importance of base
rates when faced with certain kinds of statistical problem (Tversky and Kahneman
1973); and, on some interpretations, they make flawed choices on certain versions of
the Wason card selection test (Wason 1968).22
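To make the point about base rates concrete, it may help to work through a case of the kind standardly used in this literature (the figures below are invented purely for illustration, and are not drawn from Tversky and Kahneman’s studies). Suppose that a condition affects 1% of a population, and that a test for it correctly classifies 90% of those who have the condition and 90% of those who do not. Bayes’ theorem gives the probability that a person who tests positive actually has the condition:

\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.08.
\]

A subject who neglects the 1% base rate will tend to put the probability close to the test’s accuracy of 90%, rather than at the correct figure of roughly 8%.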
In other cases, apparent errors have been ascribed to the operation of heuristics
that, far from being optimal*, are better characterised as ‘quick and dirty’. The
subjects are said to use techniques that rely on rules that are in some way inadequate
to the task, or on inaccurate assumptions about the environment. ‘Quick and dirty’
heuristics may work well enough much of the time, but they will break down under
some circumstances, leading the subject into inconsistency. A standard example is
Tversky and Kahneman’s suggestion that in making judgements about probability,
human subjects employ a heuristic of representativeness. This can lead to error,
because the logic of representativeness differs from the logic of probability
(Tversky and Kahneman 1983).23 ‘Quick and dirty’ heuristics are not optimal*,
because they are not completely reliable, even within the subject’s usual
environment.24 The irrationality experiments, then, can be taken to suggest that the
way in which adult humans reason is not optimal* in every respect.
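A standard illustration of how the logic of representativeness can come apart from the logic of probability is the conjunction rule: for any propositions A and B, the conjunction can be no more probable than either of its conjuncts,

\[
P(A \wedge B) \le P(A).
\]

A subject who judges a conjunction to be more probable than one of its conjuncts, because the conjunction seems more representative of the case described, thereby violates the probability calculus; this is the conjunction fallacy documented by Tversky and Kahneman (1983).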
Secondly, there is a rather more speculative argument that rests on certain
plausible assumptions about our evolutionary history. It is argued that, once
historical constraints are taken into account, it appears highly unlikely that our
cognitive mechanisms are optimally* designed. Clark (1987) has presented a
detailed and persuasive version of this argument. He points out that it is reasonable
to assume that the cognitive mechanisms that underwrite our capacity to engage in
reasoning have evolved from much simpler mechanisms. These simpler mecha-
nisms will have functioned to support rather more restricted cognitive abilities.
Moreover, it is likely that some of them evolved to perform specific functions, and
have since been recruited to carry out quite different tasks. If this is correct, Clark
suggests, it is reasonable to expect the design of our cognitive mechanisms to
include some flaws or kludges. Clark’s argument might be put together with the
results of the irrationality experiments to build a case for the claim that it is highly
likely that the design of our cognitive systems is not optimal* in every respect.25

22. Oaksford and Chater (1994) explain these apparent errors as arising from the operation of an
ecologically optimal heuristic.
23. It is controversial how far human reasoning should be characterised as relying on ‘quick and dirty’
heuristics and how far it should be seen as exploiting highly adaptive, reliable shortcuts. There would
appear to be evidence for both. For discussion, see Kahneman and Tversky (1996), Gigerenzer (1996).
24. It is worth bearing in mind that, while the distinction between ‘quick and dirty’ heuristics and ‘fast
and frugal’ heuristics is a matter of degree, the distinction between ‘quick and dirty’ heuristics and
optimal* heuristics is not a matter of degree: a heuristic is either optimal* or it is not.
25. Recall that optimality* is relative only to cognitive limitations that arise from our status as finite
beings, not from flaws in our cognitive mechanisms.

We can now restate the difficulty that faces the historical theorist. The historical
theorist holds that, if we are supposed to think in a rational way, some of our
ancestors must actually have succeeded in doing so. But if it is plausible that the
design of our cognitive mechanisms is flawed, it follows that it is plausible that our
ancestors did not reason in an optimal* way in all respects. If so, the historical
theorist must accept that it is plausible that adult human beings are not supposed to
form beliefs in an optimal* way in all respects. It follows that the historical theorist
cannot combine an ecological model of the nature of rational norms with unadulter-
ated optimism about their scope.
Could the historical theorist plausibly reject the evidence of the irrationality
experiments? The proper interpretation of these experiments is still controversial.
But this is not to say that she could comfortably choose to dismiss the results of a
range of studies that, while disputed, are far from discredited: this would require an
act of faith that is unlikely to carry conviction. She appears to have two options: to
find a workable alternative to the ecological model, or to accept that there is a
substantial possibility that we should be pessimists about the scope of rational
norms.

The problem of useful falsehood

According to the ecological model, the goal of reasoning is to enable the subject to
form true beliefs about his environment and to do so quickly and efficiently. As
(Stich (1990): 96–7) points out, however, truth and biological utility may come
apart. For example, a subject who holds over-optimistic beliefs about his status
among his peers will be more likely to assert himself, and so more likely to improve
his standing. Moreover, if certain forms of wishful thinking are biologically useful
to us, it seems reasonable to assume that they benefited our ancestors too. This raises
a second worrying possibility for the historical theorist. It implies that the goal of
biologically normal reasoning is not the efficient formation of true beliefs, but the
formation of beliefs that are simply useful. Once again, it seems that it is not
biologically normal for us to reason in an optimal* way.
As it stands, this is a little too quick. For the objection to hold, it would be
necessary to establish not only that our ancestors did sometimes benefit by engaging
in wishful thinking, but that their tendency to do so derives from some heritable
trait. Moreover, it needs to be shown that possessing this trait did not merely benefit
them on odd occasions, but increased their chances of survival overall. Hence,
although there does appear to be evidence that adult humans who form excessively
optimistic beliefs on certain subjects do benefit by doing so (Taylor 1989), this does
not amount to evidence that it is normal for them to form beliefs in this way.
This objection, then, is not as well-supported as the objection from flawed design.
Still, it might be thought that it is plausible enough to worry the historical theorist. If
so, it adds further weight to the claim that she must choose between the ecological
model of rationality and the confident assertion that adult humans are supposed to
think in a rational way.

Between a rock and a hard place

The historical theorist might attempt to preserve optimism about the scope of
rational norms by dropping the ecological model in favour of a more biologically
realistic model of rationality. I will refer to a model of this kind as a biological
model.
What form would the biological model need to take in order to answer the worries
raised in the last two sections? An obvious thought is that the model should use a
notion of optimality that takes account of the actual design of the subject’s cognitive
mechanisms: on this view the subject will be reasoning in a rational way if he is
reasoning as reliably and efficiently as he can, given the cognitive tools at his
disposal. The historical theorist might suggest that it is plausible to suppose that it is
normal for us to reason in this way: it is reasonable to assume that our ancestors
sometimes reasoned as reliably and as efficiently as their cognitive capacities would
allow, and that the fact that they did so helps to explain their survival.
This version of the biological model avoids the objection from flawed design. On
this model, a subject who employs a ‘quick and dirty’ heuristic will count as
forming beliefs in a rational way, provided that it is the most accurate technique that
he is capable of using. Unfortunately, however, this response to the objection
succeeds only by setting the standards of rationality implausibly low: a subject who
mechanically applies an unreliable procedure, even in situations in which it has
proved unreliable in the past, is not reasoning in a rational way. Of course, it might
be suggested that this kind of behaviour can sometimes be practically justified
(Cherniak (1986): 82; Dennett (1987): 96–7). For example, we may be practically
justified in using a ‘quick and dirty’ procedure when nothing better is available. But
this is not enough to show that the subject is proceeding in a theoretically rational
way. It may sometimes be prudent to guess; but a belief that has been formed by
guesswork has not been formed in a theoretically rational manner.
A second worry for this version of the biological model of rationality is that, like
the ecological model, it is open to the objection from useful falsehood. This is
because it retains the assumption that the goal of rationality is the formation of true
and usable beliefs. To avoid this second objection, the historical theorist would need
to adopt an even weaker version of the biological model – one that takes the goal of
rationality to be the formation of beliefs that will maximise fitness. On this weaker
model, a procedure will count as optimal if there is no procedure that would better
enable the subject to form biologically useful beliefs, given the design of his
cognitive mechanisms. This notion of optimality comes closer to the notion of
adaptational optimality described earlier.26
This weaker version of the biological model is consistent with the claim that a
rational subject may engage in biologically useful forms of wishful thinking. Hence

26. The two notions are not precisely equivalent, if only because the adaptationist notion is generally
taken to apply to the organism’s traits and capacities. The biological model allows that our cognitive
capacities are not adaptationally optimal, and, instead, applies the notion of optimality to the way in
which the subject exercises the capacities that he has, given the tools at his disposal.

it would allow the historical theorist to avoid the objection from useful falsehood.
But this second version of the biological model is even less plausible than the first.
The model jettisons any direct connection between theoretical rationality and truth:
on this model, the rational subject will aim at truth only in contexts in which truth
maximises fitness. It would be difficult to insist that this result is more acceptable
than the failure of the historical theory.
The biological model, then, is no replacement for the ecological model. A better
solution for the historical theorist might be to retain the ecological model and to take
a pessimistic view of the scope of rational norms. I suggested earlier that pessimism
is an unattractive position, one that we should not adopt without good reason.
Nevertheless, the historical theorist might argue that we do have good reasons to
adopt it: if adult humans typically reason as badly as the irrationality experiments
suggest, then we really have no grounds on which to differentiate adult humans from
toddlers and monkeys. The most that we can say is that adult humans are supposed
to reason as usefully as they can, or adequately; but this is not to say that they are
supposed to reason in a rational way. However, there is a problem with this
suggestion. The problem is that the historical theorist appears to be committed to
pessimism even if it turns out that adult humans are capable of improving the way in
which they reason through learning.
The irrationality experiments suggest that adult humans typically reason in a
defective way with respect to certain problems. But they do not establish that adult
humans are incapable of learning to deploy better techniques. Even if the design of
our cognitive systems is flawed, there is still the possibility that we possess
mechanisms that allow us to override or side-step the flaws in our design by
enabling us to learn from our mistakes.27
To see the importance of this, it is helpful to contrast the historical theorist with a
theorist whose solution to the selectivity problem relies on the principle that ‘ought’
implies ‘can learn to’. It would be open to a theorist who accepts this principle to
combine an ecological model of the nature of rational norms with optimism about
their scope. She can do so provided that she assumes that adult humans are capable,
at least in principle, of learning to deploy optimal* procedures. She does not need to
insist that any adult human has actually learnt to reason in an optimal* way; it is the
mere ‘in principle’ possibility that counts.
In contrast, the historical theorist faces a serious difficulty in appealing to learning
to justify optimism about the scope of rational norms. The historical theorist can
insist that we are supposed to reason in an optimal* fashion only if our ancestors
did, at least sometimes, succeed in doing so. Even if our ancestors were capable of
improving their reasoning through learning, there is no obvious warrant for the
assumption that they ever learned to reason in an optimal* way in all respects. Our
ancestors might have benefited from the operation of these mechanisms, even if it

27. This point is made by (Lycan (1981): 345); and by (Nozick (1993): 113). The extent to which the
psychological evidence supports the idea that we can learn better reasoning techniques is unclear: for
discussions, see Wason (1968); (Manktelow and Over (1987): 214); (Tversky and Kahneman (1983):
300–02); Cheng et al. (1986), Fong et al. (1986).

enabled them to learn to reason in an optimal* way in certain respects but not others;
or if it provided them with more reliable procedures that fell short of optimality*. If
so, we cannot claim that it is biologically normal for us to reason in an optimal* way
in all respects, even if we can, in principle, learn to do so. It follows that the
historical theorist who opts for pessimism about the scope of rational norms is
committed to maintaining her pessimism, even if it turns out that human beings can
learn to reason in an optimal* way. This degree of pessimism seems forced.28 In this
respect, the theorist who relies on the principle that ‘ought’ implies ‘can learn to’ is
in a stronger position than the historical theorist.
The historical theorist, it seems, is in an uncomfortable position. She can retain
optimism about the scope of rational norms by adopting some version of the
biological model of their nature. But the biological model is indefensible. She can
retain the ecological model of rationality, but only at the price of accepting an
implausibly pessimistic view of the scope of rational norms. This view is implaus-
ible because it requires her to allow that adult humans may not be subject to rational
norms, even if they are capable of learning to reason in an optimal* fashion.
However, there is one further model of rational norms that the historical theorist
might consider. I will discuss this in the next section.

A way out?

Rationality and learning

In the last section, I argued that the historical theorist might be forced to maintain
pessimism, even in the face of evidence that human beings are capable, in principle,
of learning to reason in an optimal* way. This argument might appear strained:
surely, it will be suggested, if we are capable of learning better reasoning tech-
niques, it is reasonable to suppose that our ancestors exercised this capacity and
benefited by doing so; but, if so, the historical theorist ought to maintain that it is
normal for us to do so too. This response is correct; but as I will show, in order to
make use of it, the historical theorist must abandon the ecological model, and adopt
a rather different model of rationality.
The ideal, ecological and biological models of rationality have one feature in
common: they all imply that rationality is a matter of applying a certain kind of
procedure in a given situation. There is no implication that the rational subject needs
to reflect on his procedures, or that he should be able to adapt them in response to
changes in his environment. The case of the biologically rational subject might
encourage us to question this approach: arguably, what makes this subject appear so
irrational is not simply that his procedures are unreliable, but that he will apply them
mechanically, even to situations in which they have proved unreliable in the past.

28. More needs to be said about why this appears strained. In particular, I do not wish to be understood
as committed to the principle that ‘can learn to’ implies ‘ought’: this would clearly be incompatible with
the historical theory. I will come back to this in note 31.

The ecologically rational subject, too, might seem less than rational if we suppose
that he is incapable of adapting his procedures, were his environment to change.
This suggests that rationality is not simply a matter of reasoning effectively; it
requires that we should be able to reason in a way that is sensitive to the
effectiveness of our procedures.
It would be open to the proponent of the ecological model to accommodate this
point by adding the requirement that a rational subject must be capable of reflecting
on and adapting his procedures. For the reasons given in the last section, this will not
help the historical theorist. Instead, we need to develop a model of rationality that
centres on the idea that rationality involves learning.

The optimific model

Suppose that adult humans are able sometimes to improve the way in which they
form beliefs. By exercising this capacity, they are able to come ever closer to
reasoning in an optimal* way. There are at least two different ways in which this
capacity might be sustained. First, we might imagine that we normally possess a
separate set of learning mechanisms that function to monitor and modify the
workings of our cognitive systems. Alternatively, we might suppose that our
capacity to learn is sustained by precisely the same mechanisms that support our
capacity to reason – that these mechanisms are capable of modifying themselves.
Again, we might assume that our learning mechanisms will exploit our most
sophisticated cognitive capacities: for example, our capacities for reflection, social
communication, and theory-building.29 But there may be other, less sophisticated
strategies. For this reason, I will avoid the assumption that a subject who is capable
of this kind of learning can be described as having reasons for adopting certain
procedures. I will assume only that our learning mechanisms are sensitive to certain
kinds of information that would justify the adoption of one reasoning procedure over
another.
The optimific model is a cousin of the ecological model, but it treats rational
norms as dynamic and as relative to the current understanding of the subject.
According to the optimific model, a subject will count as reasoning in a rational
manner if he possesses learning mechanisms of the kind described above, and if he
is using a procedure that would normally be tolerated by his learning mechanisms.
This need not imply that the subject is reasoning in a rational way only if he is using
a procedure that has been actively vetted or modified by his learning mechanisms: it
might be perfectly normal for him to use the procedures that he has innately,
provided that they are not evidently unreliable or inefficient. Conversely, the subject
can be characterised as reasoning in an irrational manner if he is using a procedure
that would normally be rejected by his learning mechanisms; or if he is using a
procedure that he has developed by an abnormal learning process – for example, a
process that has been abnormally hasty or careless.

29. Goodman (1983), Cohen (1981), Stich and Nisbett (1980).

On this view, a subject who commits the Gambler’s Fallacy need not be regarded
as reasoning in an irrational manner. He may simply be confused or mistaken about
the nature of probability. His thinking can be classed as irrational only if he persists
in committing the same mistake after the fallacy has been explained to him, or when
events fail to match his predictions to such an extent that it would be normal for him
to call his assumptions into question. Some types of error, however, can be regarded
as gratuitous, in the sense that they do not depend on some false assumption or
theory. In these cases, the subject is in a position to detect his mistake whenever he
makes it. Examples of this type of error include cases of self-deception or wishful
thinking – errors like these can only be made intelligible by assuming that they are
motivated. On the optimific model, a subject who commits an error of this kind will
be treated as thinking irrationally, because he should normally recognise that his
reasoning is flawed at the time.30
The optimific model suggests a rather complex picture of the relationship between
rationality, optimality* and normativity. On the one hand, as we have seen,
rationality and optimality* are not equivalent on the optimific model: a subject may
count as forming beliefs in a rational way even if the procedure that he is using is not
optimal*, provided that he has developed that procedure in the right way; conversely,
a subject may count as irrational even if he is using an optimal* procedure, if he has
acquired that procedure by accident.
On the other hand, because optimality* is the goal of the learning mechanisms, it
functions as a standard against which the subject’s procedures may be judged by
himself or by others. If the subject is reasoning in accordance with a procedure that
is not optimal*, it may still be appropriate to criticise him – that is, to point out that
the procedure that he is using is flawed – even though it may not be appropriate to
accuse him of irrationality. However, should he become aware of the criticism, and
should he recognise that it is well-founded, it would no longer be rational for him to
proceed as before. On the optimific model, then, optimality* is not the test of
rationality, but it is a goal of the rational subject as he refines the procedures and
assumptions that govern his reasoning.
It might be objected that, like the biological model, the optimific model sets its
sights implausibly low. In particular, the optimific model allows that a subject may
be reasoning in a perfectly rational way even though he is reasoning in accordance
with a false assumption or an inappropriate set of rules. However, I suggested above
that the biological model is implausible not simply because it allows that a rational
subject might use flawed procedures, but because it does not require the rational
subject to be sensitive to evidence of previous failures. The optimific model, in
contrast, treats it as a crucial requirement on a rational subject that he should reflect
on and test his procedures. The model does not insist that a rational subject must be

30. A fully developed version of the model should include a much more detailed account of the
circumstances under which the inadequacy of a particular procedure is normally evident to the subject. I
do not have the space to explore this issue here.

able to supply a fully developed theoretical justification of the procedure that he is
using; nevertheless it requires that his use of the procedure is at least subject to the
kinds of consideration that might be used to provide such a justification – past
experience that suggests it is fully reliable, theoretical beliefs, endorsement by
recognised experts and so on. The optimific model, then, does not set the standards
of rationality at the same low level as the biological model.
Again, it might be objected that the optimific model implies a radically relativistic
picture of rational norms. On the optimific model, what constitutes a rational
procedure will vary from person to person, and for the same person at different
times. This would be an unfortunate consequence of the model if it were taken to
suggest that whatever the subject regards as rational is rational, or that there is no
common ground from which we are able to criticise each other’s thought processes.
But it seems clear that the form of relativism implied by the optimific model does
not have this consequence. It is clearly not the case that the optimific model implies
that any procedure favoured by a reasoning subject is a rational procedure: a
procedure will count as rational only if it has been arrived at by the normal operation
of the subject’s learning mechanisms. Hence, the model does not endorse a naïve
subjectivism with respect to rational norms. Moreover, the learning mechanisms
will have the same function and will normally operate according to the same
principles in all human beings. And of course, it is open to a proponent of the
optimific model to insist that one of the ways in which it is normal for human beings
to improve their reasoning is to learn from others. If so, it will be perfectly
appropriate for us to assess the reasoning of other people who are in communication
with us, and to criticise those who ignore (without good reason) well-founded advice
about how to form true beliefs.
The historical theorist could easily adopt the optimific model of rationality. If we
do possess learning mechanisms of the kind described above, it is reasonable to
assume that our ancestors did so too, and that they benefited from their operation. If
so, our learning mechanisms will be operating normally in monitoring and modify-
ing the workings of our cognitive mechanisms; and, as a result, it will be normal for
us to reason in accordance with the procedures that they currently endorse.31 Just as
in the case of the chimpanzee who learns to wash fruit, it will be normal for us to
employ any new reasoning technique that we have learned, provided that it was
generated by our learning mechanisms in the normal way.
Does the optimific model of rationality entail the historical theory? It will
certainly follow from the optimific model that we normally reason in a rational way.
But it would be open to a proponent of the optimific model to deny that this
biological norm is equivalent to a rational norm. She might hold that rational norms
have properties (motivational properties, for example) that differentiate them from
biological norms, and which the historical theory fails to explain. If this were so, we

31. In other words, although the historical theorist does not accept the principle that ‘can learn to’ implies ‘ought’, she is committed to the principle that ‘normally can learn to’ implies ‘ought’.

would need some further account that would make it clear why we rationally ought
to reason in a biologically normal way. The two accounts are natural allies, but the
optimific model does not entail the historical theory.

The optimific model and the problem of flawed design

On the optimific view, the problem of flawed design gives us no immediate reason
to suppose that the norms of rationality are beyond our reach. Even if our reasoning
mechanisms are flawed, it is possible that we possess learning mechanisms that
enable us to develop new and better procedures. If we do possess such mechanisms,
it will be normal for us to notice errors and to develop new procedures; and it will
be normal for us to use any (apparently) improved procedures that we have found.
Moreover, a proponent of the optimific model, unlike the proponent of the
ecological model, does not have any difficulty in accommodating the idea that the
norms of rationality are norms that our ancestors sometimes met. All that she needs
to assume is that our ancestors possessed learning mechanisms that sometimes
effected improvements in their reasoning procedures, and that they sometimes
reasoned in accordance with these improved procedures. This is a far weaker claim
than the claim that they sometimes reasoned in an optimal* way.
It follows that a historical theorist can combine the optimific model with
optimism about the scope of rational norms without having to dismiss empirical
evidence for defects in our cognitive systems. Of course, she can accept this
combination of views only if she assumes that we possess learning mechanisms of
the kind that I have described. But we have already seen that there are independent
reasons to insist that, if this is not the case, we ought not to characterise adult human
beings as subject to rational norms.
Still, it might be objected that the optimific model simply pushes the problem one
stage back. If it is reasonable to expect the design of our reasoning mechanisms to be
flawed, then surely it is reasonable to expect our learning mechanisms, if they exist,
to be flawed as well. If so, we will need to consider the possibility that improve-
ments in the subject’s reasoning procedures may come about as a result of a learning
process that is itself ‘quick and dirty’. For example, suppose that a subject comes to
recognise the importance of taking account of base rates in thinking about certain
kinds of statistical problem; but that he does so only because he has a tendency to
defer to experts. Deference to experts is a useful way of improving one’s reasoning,
and this may explain why the subject has this tendency; but it is far from reliable.
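To make the base-rate example concrete, here is a simple illustrative calculation (the figures are purely illustrative and are not drawn from the original discussion). Suppose that a condition occurs in 1% of a population, that a test for it detects it 90% of the time, and that it yields false positives 9% of the time. By Bayes’ theorem, the probability that someone who tests positive actually has the condition is

P(C \mid +) \;=\; \frac{P(+ \mid C)\,P(C)}{P(+ \mid C)\,P(C) + P(+ \mid \neg C)\,P(\neg C)} \;=\; \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \;\approx\; 0.09,

that is, roughly 9%; a reasoner who neglects the base rate will typically put the figure near 90%. Learning to take the base rate into account is just the kind of improvement the deferential subject is imagined to have acquired.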
Moreover, it can be questioned whether a subject who is reasoning in accordance
with a procedure that has been adopted via a ‘quick and dirty’ process of learning
can be characterised as reasoning in a rational way, for he is not fully sensitive to the
effectiveness of the procedures that he is using. Yet the optimific model of
rationality does not seem to rule this out.
There are two different ways in which a proponent of the optimific model might
respond to this possibility. First, she might deny that there is any pressing need to
insist that our learning mechanisms should be free from flaws. What is important is
that the subject has adopted his procedure as the result of the operation of a
mechanism that at least has the capacity to improve his reasoning. This might be the
case even if the operation of our learning mechanisms is not itself wholly reliable.
However, I think that there is a better response that the proponent of the optimific
model might make. This is to suggest that, just as the rational subject must have the
capacity to improve the procedures that he uses in reasoning, so he must be capable
of improving the procedures that he uses in adopting new reasoning procedures. On
this view, the procedures by which the subject comes to adopt new ways of
reasoning are no more likely to be incurably flawed than the procedures he uses in
ordinary, first order reasoning. A rational subject can modify his tendency to defer to
experts in matters of reasoning in just the same way that he can modify his tendency
to defer to experts on any other matter.
If the optimific theorist makes this response, she will allow that a subject who is
reasoning in accordance with an optimal* procedure that he has adopted by a ‘quick
and dirty’ route may be reasoning in a rational way. The subject will be reasoning in
a rational way not simply because he is capable of revising his reasoning procedure,
but because he is also capable, at least in principle, of revising the process by which
he came to adopt it. In the future, he may reject his procedure, or retain it on a
different basis. For the present, he is open to criticism, but not to a charge of
irrationality, provided that he has adopted the procedure as the result of a process
that is at least not evidently sub-optimal*.
It is possible then for the historical theorist to remain optimistic about the scope of
rational norms, even if it turns out that the design of our learning mechanisms is
flawed. Still, there is one further possibility that needs to be considered: if our
learning mechanisms are flawed, it may be that there are some errors in reasoning that
the subject is, in principle, unable to correct. If this were true of us, it would imply
that we are not subject to the norms of optimific rationality after all. This is because
we could no longer say that the goal of our learning mechanisms is to ensure that our
procedures are optimal*; at best, it will be to ensure that they are as close to
optimal* as they can be, given our design. If the historical theorist accepts the
optimific model of rationality, she must accept that, in this situation, it would be
appropriate to adopt a pessimistic view of the scope of rational norms. But this does
not seem to be an unreasonable position. It is one that would be shared by any
theorist who accepts the principle that ‘ought’ implies ‘can learn to’.
The historical theorist, then, can allow both that human beings are supposed to
think rationally and that their learning mechanisms may be flawed. She is committed
to pessimism about the scope of rational norms only if it turns out that our learning
mechanisms have flaws that could prevent us, in principle, from learning to reason
in an optimal* way.32

32. It is important to bear in mind that the notion of optimality* builds in the idea that limitations in our
informational and computational resources (as opposed to some arbitrary defect in design) may prevent
us, in principle, from identifying the best procedure to use in any given situation.

The optimific model and the problem of useful falsehood

Adopting the optimific model will also allow the historical theorist to resolve the
problem of useful falsehood. If our learning mechanisms function to ensure that our
procedures come ever closer to optimality*, it will be normal for us to root out any
tendency to dismiss relevant evidence, no matter how biologically useful that
tendency may be. Moreover, as we have seen, wishful thinking is a gratuitous form
of bad reasoning: wishful thinking is evidently sub-optimal*, and our learning
mechanisms should not normally tolerate it.
Once again, it might be objected that this solution simply pushes the problem one
stage back. Given that we would benefit from learning mechanisms that left
biologically useful self-deceptions in place, why should we not suppose that our
learning systems are supposed to do just that? However, to say that we would benefit
from a learning mechanism of this kind is not to say that we have one. It may simply
be too difficult to design a learning mechanism that is capable of discriminating
between biologically useful and biologically damaging procedures. At any rate, the
historical theorist might reasonably ask for evidence that we possess learning
mechanisms that are capable of making this discrimination. Moreover, as we have
seen, it is not unreasonable for the historical theorist to insist that, if adult humans
are incapable, in principle, of learning to reason in an optimal* fashion, we ought to
deny that they are subject to rational norms.

Conclusion

In this paper, I have argued that a proponent of the historical theory has good reason
to adopt an optimific model of rational norms. On this model, a subject will count as
thinking rationally so long as he is thinking in a way that seems optimal* to him,
provided that this results from the operation of a set of learning mechanisms that
meet the following three constraints: firstly, the function of these mechanisms must be
to ensure that the subject’s procedures come ever closer to optimality*; secondly,
their own procedures must themselves be open to scrutiny and modification; and,
thirdly, they must, in principle, be capable of enabling the subject to learn to reason
in an optimal* way. As it stands, this is merely the outline of an account; but I think
that it is an account that anyone who is sympathetic to the historical theory has
reason to investigate.

Acknowledgements

I would like to thank Kim Sterelny and an anonymous referee for this journal for
valuable comments on an earlier draft of this paper.
