
Deliberative Coherence

Author(s): Elijah Millgram and Paul Thagard


Source: Synthese, Vol. 108, No. 1 (Jul., 1996), pp. 63-88
Published by: Springer
Stable URL: http://www.jstor.org/stable/20117531

ELIJAH MILLGRAM and PAUL THAGARD

DELIBERATIVE COHERENCE*

ABSTRACT. Choosing the right plan is often choosing the more coherent plan: but what
is coherence? We argue that coherence-directed practical inference ought to be represented
computationally. To that end, we advance a theory of deliberative coherence, and describe
its implementation in a program modelled on Thagard's ECHO. We explain how the theory
can be tested and extended, and consider its bearing on instrumentalist accounts of practical
rationality.

1.

In this paper we will advance a view of the role coherence considerations
play in practical reasoning, that is, in reasoning directed toward decision
and action. The appeal to coherence in the context of practical reasoning
is a close relative of the appeal to reflective equilibrium that is familiar
from recent political philosophy (Rawls 1971); advocates of the idea that
coherence considerations have a role in the rational regulation of motivation
include Richardson (1990), Hurley (1989) and Brink (1989). But
despite the increasing prominence of coherentist considerations in recent
moral philosophy, coherence accounts of the rational revision of motivational
systems have not gotten fully off the ground; the reason is that two
problems have not been dealt with. The first is that no one knows what
deliberative coherence is. The second is that proposed justifications for
deploying coherentist techniques in revising motivational systems have
not managed to win wide acceptance. The first problem should make the
second unsurprising: it is hard to give convincing justifications when one
does not have a reasonably crisp and concrete picture of what one is trying
to justify.
That no one knows what deliberative or motivational coherence is
should perhaps be expected, since coherence theories of truth and of justification
of belief, which have been around for much longer, are faced
with the same difficulty. BonJour, for instance, an advocate of coherence
theory, concedes that "the main work of giving ... an account [of coherence],
and in particular one which will provide some relatively clear basis
for comparative assessments of coherence, has scarcely begun, despite the

long history of the concept" (BonJour 1985, 93f). For a very long time,
philosophers have been using the notion of coherence without having anything
to back it up, and, unlike the newly clothed emperor of the children's
story, have never really been called on it. Anybody who proposes to use
the concept of coherence has to do better than this: at a minimum, one
needs a way of saying what coherence is, and when one system (of beliefs,
or motivations) is more coherent than another. In Sections 4-6, we will
propose a technique for generating comparative judgments of deliberative
coherence. This technique will provide a substantive, albeit partial, account
of deliberative coherence. We will discuss how this account of coherence
can be tested and extended.
The best way to address the second problem is to narrow it. Our interest
is in coherence-driven revision of motivational systems: in techniques that
alter one's motivational system to make it more coherent. In this context,
justifying the appeal to coherence means finding occasions on which the
use of techniques that increase coherence is justified. Once we have a
substantive specification of the principles of deliberative or motivational
coherence, we can also address the demand for justification by considering
what can be said for particular principles embodied in those techniques.
Our strategy will be to address these problems for deliberative coherence
by using as a guide what we know about reasoning aimed at producing
beliefs rather than decision (or 'theoretical reasoning'). Because the territory
is muddy there too, we will have to make claims about theoretical
reasoning that are themselves controversial; fortunately, their function here
will be only heuristic, so they will not require the attention or argument that
they would have to be allotted otherwise. We will take the two problems
in reverse order, and begin by asking when coherence considerations are
in place in the revision of systems of belief.

2.

You can have contradictory or inconsistent beliefs, and when you do, something's
got to give: if you are rational, you modify your beliefs.1 Just how
contradictory beliefs are adjusted is not well understood. Logic textbooks
give out at this point, saying either that if you arrive at a contradiction, a
premise must be rejected (but without telling you which premise to reject),
or that if you arrive at a contradiction, you may legitimately infer anything
whatsoever. Such advice is respectively unhelpful and unrealistic.
However, while the process of revising incompatible beliefs is not well
understood, it is reasoning nonetheless. Reasoning consists not just in
drawing inferences from one's beliefs, but in figuring out what to do when
one's beliefs turn out to be inconsistent. How is this done? Examples from
the history of science suggest that coherence is a central consideration
(Thagard 1992c), and day-to-day experience concurs. For instance, Max
answered the phone at his home number all day, so I assumed he was at
home; but Martha tells me they spent most of the day out. Of the various
explanations (Martha is lying, I was hallucinating, and so on), one makes
everything hang together better than the others. (Max has just gotten call
forwarding, and wouldn't that be just like him.) This new belief ties as much
as possible of my prior system of beliefs together in as coherent a manner
as possible; I adopt it and clean up my system of beliefs appropriately. The
revision of contradictory or inconsistent systems of belief is evidently an
occasion for deploying coherence considerations.
Is there a motivational analog of inconsistent systems of belief? As a
matter of fact, there are two, corresponding to two senses of 'goal'. Talk
about goals can be about either intentions or desires. In the first sense,
to say that someone has a goal is to say something on the order of: he
is pursuing the goal, or is following a plan for attaining the goal. In the
second sense, to say that he has a goal is to say that he desires it, even if he
is not actively pursuing it or planning to pursue it. (Someone may desire
something, i.e., have a goal in the second sense, without intending to attain
it, i.e., without having a goal in the first sense, if, for example, attaining the
goal would be incompatible with something he thinks is more important.)
It is a commonplace that one can discover that one's goals conflict, that
is, that one cannot attain them all. I want, and had been planning to buy,
a new car and a ticket to Thailand. If I am rational, when I discover that I
cannot afford both, I adjust my intentions, giving up at least one of what
had been my goals (in the first sense).
It is less of a commonplace that I may be put in a similar position with
respect to goals in the second sense. I may have to stop intending to go to
Thailand this fall, but clearly I am not irrational if I keep on wanting to.
And it is in fact often taken for granted that one is never rationally required
to give up a desire that does not itself involve a factual mistake or that is
not derived by means-end reasoning from some further desire; this view
frequently gets described, with a certain amount of historical license, as
'Humean'. However, there are nevertheless desires that plausibly require
rational revision. Oedipus wants to marry Jocasta and wants not to marry
his mother. If he discovers before the ceremony that Jocasta is his mother,
he discovers that he has desires that are directly incompatible. It would be
untrue to the phenomena, and probably does not make sense, to suppose that
Oedipus merely gives up one intention or the other, leaving an unsatisfied
background desire. Oedipus' motivational system readjusts, after which

This content downloaded from 152.2.176.242 on Sun, 17 Jul 2016 11:37:22 UTC
All use subject to http://about.jstor.org/terms
66 ELIJAH MILLGRAM AND PAUL THAGARD

he need have no unsatisfied or unsatisfiable desires. (It is quite likely that


his desire to marry Jocasta will simply evaporate.) Directly incompatible
desires can require revision of the motivational system in which they are
embedded; this is the way in which goals in the second sense can form an
analog of an inconsistent system of belief. It is plausible that such revision
turns on considerations of coherence: of the incompatible goals or desires,
one preserves the goal or desire that most hangs together with other desires,
goals, and views about what is important; or, alternatively, one replaces
both goals with a new goal that hangs together with other goals, desires, and
views about what is important. If this is right, then directly incompatible
desires are an occasion for coherence-driven revision of one's motivational
system.2
Even when goals are not directly incompatible, conflict between them
may demand coherence-driven revision. Suppose that on my way downtown
I get stuck in traffic and realize that I am not going to have time to
run all my errands; I must decide which errands to run. There is a widely
shared and more or less standard view about the right way to do this. On
the basis of the utilities of the different outcomes (on a related view, the
strengths of my desires) and the probabilities of the outcomes given actions
that are open to me, I am to calculate the expected utilities of attempting to
run different packages of errands, and select the package with the highest
expected utility. But this view is unrealistic: not only does it require more
calculations than I am prepared to perform (if there are n errands, there
will be 2ⁿ sets of errands I might run),3 but it requires information that
normally I do not have: precise assignments of utilities and probabilities
to the various options.
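
As a toy illustration of the scale of the problem (the fragment is ours and purely schematic, not part of any planning system discussed here), the following Python lines simply count the candidate packages for the seven errands of this example; the precise utilities and success probabilities that the standard view also demands appear nowhere in it.

from itertools import combinations

errands = ["packing tape", "boxes", "mothballs", "scissors",
           "bank", "video rental", "post office"]

# Every subset of the errand list is a candidate package that the standard
# view asks us to score with a precise utility and success probability.
packages = [c for k in range(len(errands) + 1)
            for c in combinations(errands, k)]

print(len(packages))   # 2**7 = 128 candidate packages for seven errands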
When goals of this kind conflict, people typically recover gracefully,
and they recover without relying on the information required by current
planning systems. Stuck in traffic, I sort through my errands, and settle on
some subset of them: typically, the group of errands that most fit together
with one another and with my other desires and plans. I will get the packing
tape, boxes, mothballs and scissors (it would not make much sense to get
any of these without the others), but leave the trip to the bank for some other
occasion, and forget the video rental and the trip to the post office entirely. I
do this without having more than the vaguest idea of what the utilities of the
different goals are (I am not able to say how much more strongly I desire
to have finished one errand rather than another), and without knowing just
how probable success in accomplishing one set of errands is, given that I
perform a particular action available to me. Examples like this suggest that
revising merely conflicting (as opposed to directly incompatible) goals is
also an occasion for deploying coherence considerations.4

Even if it is granted that desires or goals can conflict in a way that makes
abandoning or otherwise revising some of them make sense, it might still
be objected that we do not have reason to think that the revision should be
such as to increase or maximize the coherence of the motivational system.
After all, there are many ways of removing conflict, not all of which favor
what we would be inclined to call coherence. For example, I could replace
the conflicting goals in my list of errands with a sudden intention to give up
eating fish, or to scour the Himalayas for tranquillity and spiritual uplift,
or, less fancifully, to get the packing tape but not the boxes, and see a movie
on impulse. Why coherence?
The real answer to this question lies in the connections between descriptive
and normative theories of rationality, on the one hand, and our actual
practices, on the other; we will touch on these issues briefly below. As a
stopgap, here are two shorter answers. First, because goals compete for
limited resources, goals that hang together, and which naturally produce
overlapping plans of action, tend to be more easily jointly satisfied.5 A
policy of adopting new goals that are unrelated to my current goals (for
example, replacing or supplementing my list of errands with a Himalayan
trek) makes it less likely that I will accomplish many of my goals: sudden
swerves squander sunk cost. Second, the point of practical reasoning is
to guide us in the decisions that shape our lives. When goals belong to
human beings, they are components of lives, and for something to be a
life, it has to be coherent. So practical reasoning should tend to increase
the coherence of systems of goals.6 (This consideration is of course only
indirectly applicable when we are considering not human beings but tools
such as planning systems.)
Perhaps a caveat is in place here as well. After a certain point, increasing
coherence increases fragility. When everything fits together, finding
out you were wrong about one thing has ramifications for everything you
believe or desire. It is probably better for human beings if everything does
not fit together too tightly; if you have a number of relatively independent
motivational bases (job, family, and so on), you are better able to handle
having any one of them kicked out from under you. So there are probably
limits to the desirability of deliberative or motivational coherence.
However, we will not further consider these here.

3.

Inconsistency in one's system of belief can be an occasion for coherence-driven
revision; but what is coherence? We will now briefly describe a
theory of explanatory coherence (TEC) and its implementation in a computer
program (ECHO). Together, TEC and ECHO provide a general and
testable account of explanatory coherence. Both are described at much
greater length in Thagard (1989 and 1992c, ch. 4). Here we present a
concise and informal statement of TEC and ECHO adapted from Thagard
(1992a).
TEC consists of seven principles:

1. Symmetry. Explanatory coherence is a symmetric relation, unlike, say,
conditional probability.
2. Explanation.
(a) A hypothesis coheres with what it explains, which can either be
evidence or another hypothesis;
(b) hypotheses that together explain some other proposition cohere
with each other; and
(c) the more hypotheses it takes to explain something, the less the
degree of coherence.

3. Analogy. Similar hypotheses that explain similar pieces of evidence
cohere.
4. Data Priority. Propositions that describe the results of observations
have a degree of respectability of their own.
5. Contradiction. Contradictory propositions are incoherent with each
other.
6. Competition. If P and Q both explain a proposition, and if P and Q are
not explanatorily connected, then P and Q are incoherent with each
other. (P and Q are explanatorily connected if one explains the other
or if together they explain something.)
7. Acceptance. The acceptability of a proposition in a system of propositions
depends on its coherence with them.

ECHO takes as input statements whose interpretation is, e.g., that
hypotheses H1 and H2 together explain evidence E1. ECHO constructs
a network in which it represents each proposition by a network node called
a unit; ECHO constructs links between units in accord with TEC. Whenever,
according to TEC, two propositions cohere, ECHO places an excitatory
link between the units that represent them? No wait.
link (with weight greater than 0) between the units that represent them.
Whenever two propositions incohere according to TEC, ECHO places an
inhibitory link (with weight less than 0) between them. In accord with
Principle 1, Symmetry, all links are symmetric. Given the input that H1
and H2 together explain evidence E1, Principle 2, Explanation, requires
that ECHO produce excitatory links between the units representing the
following pairs of propositions: H1 and E1, H2 and E1, and H1 and H2.
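
To make this construction concrete, here is a small sketch in Python (ours, and schematic; ECHO itself is a Lisp program, and the default weights used below are illustrative assumptions rather than ECHO's actual parameters) of how input of this form can be compiled into symmetric weighted links.

EXCIT = 0.04    # assumed default excitatory weight (illustrative only)
INHIB = -0.06   # assumed default inhibitory weight (illustrative only)

links = {}      # maps an unordered pair of units to a weight

def link(a, b, w):
    # Principle 1: links are symmetric, so store one entry per unordered pair.
    pair = frozenset((a, b))
    links[pair] = links.get(pair, 0.0) + w

def explain(hypotheses, proposition):
    # Principle 2: each hypothesis coheres with what it explains (2a) and with
    # its co-hypotheses (2b); the weight shrinks as more hypotheses are needed (2c).
    w = EXCIT / len(hypotheses)
    for h in hypotheses:
        link(h, proposition, w)
    for i, h in enumerate(hypotheses):
        for h2 in hypotheses[i + 1:]:
            link(h, h2, w)

def data(propositions):
    # Principle 4: observed propositions are linked to a special evidence unit.
    for p in propositions:
        link(p, "EVIDENCE", EXCIT)

def contradict(p, q):
    # Principles 5 and 6: contradictory or competing propositions incohere.
    link(p, q, INHIB)

# The example from the text: H1 and H2 together explain E1.
explain(["H1", "H2"], "E1")
data(["E1"])
print(links)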

In accord with Principle 2(c), which provides a kind of simplicity consideration,
the weights among the units given the above input are lower than
they would be if only one hypothesis had been needed in the explanation.
Principle 3 says that analogy can also be a source of coherence, and
ECHO constructs the appropriate excitatory links given input that says that
propositions are analogous. To implement Principle 4, Data Priority, ECHO
takes input specifying propositions as data and constructs an excitatory link
between each of those propositions and a special evidence unit. On the basis
of Principles 5 and 6, Contradiction and Competition, ECHO constructs
inhibitory links between units representing propositions that contradict
each other or compete to explain other propositions. Finally, Principle
7, Acceptability, is implemented in ECHO using a simple connectionist
method for updating the activation of a unit based on the units to which
it is linked. Units typically start with an activation level of 0, except for
the special evidence unit whose activation is always 1. Activation spreads
from it to the units representing data, and from them to units representing
propositions that explain data, and then to units representing higher-level
propositions that explain propositions that explain data, and so on. Units
connected by inhibitory links suppress one another's activations. Activation
of each unit is updated in parallel until the network has settled, i.e. all units
have achieved stable activation values; this usually takes fewer than 100
cycles.
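
The settling process itself can also be sketched. The fragment below is again ours, with illustrative parameter values (a decay of 0.05, a settling threshold of 0.001, and the link weights from the preceding sketch); it updates the units of the H1/H2/E1 network in parallel until their activations stabilize, with positive net input pushing a unit toward 1 and negative net input toward -1.

DECAY = 0.05                      # assumed decay parameter (illustrative)
weights = {                       # symmetric weights for the H1/H2/E1 example
    ("E1", "EVIDENCE"): 0.04,     # data priority
    ("H1", "E1"): 0.02,           # two hypotheses are needed, so each
    ("H2", "E1"): 0.02,           #   explanatory link is weaker (2c)
    ("H1", "H2"): 0.02,           # co-hypotheses cohere (2b)
}
units = ["EVIDENCE", "E1", "H1", "H2"]

def neighbors(u):
    # Yield (neighbor, weight) pairs; links are symmetric.
    for (a, b), w in weights.items():
        if a == u:
            yield b, w
        elif b == u:
            yield a, w

act = {u: 0.0 for u in units}
act["EVIDENCE"] = 1.0             # the special evidence unit is clamped at 1

for cycle in range(200):          # the text reports settling in under 100 cycles
    new = {"EVIDENCE": 1.0}
    for u in units:
        if u == "EVIDENCE":
            continue
        net = sum(w * act[v] for v, w in neighbors(u))
        change = net * (1.0 - act[u]) if net > 0 else net * (act[u] + 1.0)
        new[u] = act[u] * (1.0 - DECAY) + change
    settled = all(abs(new[u] - act[u]) < 0.001 for u in units)
    act = new
    if settled:
        break

print(act)                        # E1, H1 and H2 all settle at positive values

With inhibitory links added between competing hypotheses, the same loop yields the suppression of less well connected clusters described in the next paragraph.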
When explanations compete, the clumps of nodes that are more highly
connected, more connected to data nodes, and so on - that is, the explanations
that TEC holds have greater explanatory coherence - will inhibit
the smaller, less highly connected clumps of nodes (the accounts with less
explanatory coherence) that compete with them. So the relative explanatory
coherence of two competing explanations can be read off the program
run.7
ECHO has been used to model theory evaluation in the history of
science,8 and to study the ways in which beginning students learn science
and how people perceive interpersonal relationships.9 Experiments indicate
that the model is robust (minor variations in the parameters do not affect
the final results of the program run). The model is extremely simple.
(In some ways it is over-simple. For example, it treats propositions as
atomic units, with no internal structure, and it has nothing to say about
how competing explanations are generated.) But it is good strategy to start
out with ruthlessly simplified techniques and theories. And despite the
simplicity, the results so far have been impressive, which strongly suggests
that ECHO must be doing something right.
The Theory of Explanatory Coherence (TEC) expressed by ECHO,
together with ECHO's implementation, do everything it would be reasonable
to want a definition of explanatory coherence to do. (Which is not to
say everything ever demanded of definitions; for example, ECHO does not
attempt to give a list of necessary and sufficient conditions for coherence.)
ECHO amounts to a principled algorithm for assessing explanatory coherence;
by providing a way of determining when one explanatory hypothesis
has greater explanatory coherence than another, it defines a partial ordering
over explanations. And because its implementation is closely tied to the
principles that make up TEC, TEC provides a verbal expression of the
notion of coherence embodied by the program. Recall that the first and
foremost problem of coherence theory was that no one was able to say
what coherence is. ECHO and future ECHO-like programs provide a way
of addressing this problem.
We think that ECHO provides not just an account of explanatory coherence,
but an account of explanatory coherence that is on the right track.
While this is too large a claim to be argued for now, this is a good time to
say a few words about the kinds of considerations that can be appealed to
to support it. Because ECHO runs can be compared to the coherence judgments
made by scientists, juries, experimental subjects, and so on, ECHO
provides a way to assess the definition of coherence that it provides, and
to improve on it. ECHO's picture of coherence is supported when its judgments
agree with those of people who we think are getting, or have gotten,
it right. Furthermore, the possibility of comparison means that ECHO can
be improved by modifying it to bring its judgments into line with human
judgments; and this is in fact how ECHO has evolved.10
How can comparisons like these support the normative claims of a
model of rationality? Showing that people in fact think in such-and-such
a way does not show that they should think that way; they may simply be
making a very common mistake. It is important not just that people do adjust
their motivations in accordance with the model, but that, when we think
they are getting it right, they adjust their motivations in accordance with
the model. It is also important that the criterion is not simply behavioral.
However closely the model conforms to what we think is proper practice,
and however suggestive it is, until we can say why doing things this way
gets them right, we are not in a position to make normative claims for it
stick. One reason, perhaps not the most important one, for insisting that the
internal workings of the model make sense is a pitfall in treating programs
as theories of rationality: once one understands what problem the program
is trying to solve, one may realize that the problem is computationally
intractable, and that no program will be able to solve it: both the program
and the people it models may be doing what they do because the task
cannot be done without cutting corners.11
Perspicuous representation of an inference pattern is an important step
towards seeing if it is justified. First, it may be that perspicuous representation
is itself a good deal of the argument: a pattern of inference, e.g., modus
ponens, is displayed and recognized as a legitimate form of inference. And
secondly, the use of a pattern of inference under consideration to rationally
reconstruct available bodies of inference has traditionally been taken to be a
very strong argument for its legitimacy; for example, the ability of Frege-Russell
logic to reconstruct large bodies of mathematical argument was
as telling as the straight-off plausibility of its rules and axioms. Rational
reconstructions of this kind normally require the perspicuous representation
of the basic patterns of inference that they deploy; without them, it is
hard to tell what has been successfully reconstructed and what has not.
The techniques predominantly used to represent patterns of inference
have been those familiar from logic textbooks: the inference is rewritten
in a formal language designed to highlight its structure and make it easy to
verify that only allowable transitions have taken place. These techniques
have proven enormously fruitful in some areas; however, they have not
been as successful when applied to such patterns of reasoning as inference
to the best explanation. Other representational techniques may turn out
to be the most helpful in studying patterns of inference that have resisted
the traditional style of formalization. (The right medium of representation
depends in part on what one is trying to represent: a picture can be worth
a thousand words, and there are things one can do with words that one
cannot do with pictures.) ECHO is an example of the use of computational
techniques to represent a pattern of inference that has hitherto resisted
formalization. ECHO can be used to display perspicuously what a certain
type of coherence-based inference comes to. It can be used descriptively,
to model actual cognitive processes; but it can also be used normatively,
to rationally reconstruct them. Rational reconstructions of this kind
are arguments for the legitimacy of the pattern of inference that ECHO
represents.
We propose using ECHO as a model for addressing the analogous
problem of deliberative or motivational coherence. An ECHO-like program
can provide a way of rendering comparative judgments of deliberative
coherence, and in so doing, spell out the notion's content. Such a program
can be an experimental tool, allowing the same kind of testing and feedback
that ECHO makes possible in studying explanatory coherence. And using
such a program to develop rational reconstructions of available bodies of
inference would be a way of arguing for the legitimacy of the pattern of
inference modelled by the program. We are calling the first version of the
proposed program DECO, for Deliberative Coherence.

4.

Like ECHO, DECO is ruthlessly simplified. One simplification is that of
excluding from the model elements that are controversial or thought to be
poorly understood. There are perhaps several types of justification-bearing
link between the elements of a motivational structure. It has been suggested
that deliberative or motivational coherence involves relations between
values and preferences or desires; Kantians emphasize the connections
between intentions and general rules that are presupposed by them; and
Aristotelians discuss how relatively abstract and unspecific ends can be
transformed into more precisely specified ones. Analogy may also play
an important role in deliberative coherence.12 If connections like these are
in fact justification-bearing, then they certainly contribute to the coherence
of a motivational system. Nevertheless, we propose to ignore them
in DECO, and restrict ourselves to the instrumental relation of facilitation
between goals and subgoals, because facilitation (understood to include
both means-end and constitutive connections) is uncontroversial in a way
that these more ambitious forms of justification are not. (There is a further
reason, which we will discuss in Section 5.)
The distinctions between goals, subgoals and actions are contextual
rather than intrinsic. That is, an element of a plan might be referred to as an
action on one occasion (when there is no immediate need to consider how
it is to be performed), as a goal on another (when the question as to how it
is to be performed arises), and as a subgoal on a third occasion (when the
ways in which it facilitates a further goal are being emphasized). For this
reason, DECO will not distinguish between actions, goals, and subgoals;
and (a point of terminology) we will refer to all of these indifferently as
goals or actions or deliberative factors.
We can identify principles governing the coherence of systems of goals
and actions connected by facilitation relations. These principles jointly
constitute a (minimalist) Theory of Deliberative Coherence (TDC).

1. Symmetry. Coherence and incoherence are symmetrical relations. If
a factor (goal or action) F1 coheres with another factor F2, then F2
coheres with F1.
2. Facilitation. Consider factors F1, ..., Fn that together facilitate the
accomplishment of a factor F0. Then
(a) each Fi coheres with F0,
(b) each Fi coheres with each other Fj, and
(c) the greater the number of actions required, the less the coherence
among actions and goals.
3. Incompatibility.
(a) If two factors cannot both be performed or achieved, then they are
strongly incoherent.
(b) If two factors are difficult to perform or achieve jointly, then they
are weakly incoherent.
4. Goal priority. Some factors are desirable for non-coherence reasons.
5. Judgment. Facilitation and competition relations can depend on coherence
with judgments about the acceptability of factual beliefs.
6. Decision. Decisions are made on the basis of an assessment of the
overall coherence of a set of actions and goals.

The Principle of Facilitation expresses the way in which the actions and
subsidiary goals that jointly contribute to achieving an end hang together
with each other and with the end they promote. If getting the car into
driving condition requires fixing the oil leak, replacing the brake hoses,
adjusting the idle time, and putting on a couple of new tires, it will make
some sense to do all of these if one is going to do any of them; and of
course doing these will make some sense if one is interested in getting the
car into running condition. We will consider later the question whether the
goal of getting the car running can be made sense of by its subgoals.
The third clause of the Principle of Facilitation expresses a preference
for simplicity. Other things being equal, simpler plans are to be preferred
to more complicated plans. This is because the point of a plan is that it
work, and simpler plans have a better chance of working. When the Space
Shuttle was being developed, it was proudly described as the most complex
vehicle ever built; as many realized after the Challenger catastrophe, pride
in its complexity was misplaced. Simpler plans also tend to consume fewer
resources; the surplus time, money and so on can be devoted to other goals.
The Principle of Incompatibility allows for rough and ready distinctions
between degrees of difficulty in performing actions or achieving goals.
Strongly incoherent goals may include those that are directly incompatible
(Oedipus wanting to marry Jocasta and not marry his mother), and goals
that, while not directly incompatible, are not jointly realizable (becoming
a concert pianist while earning a living as a jackhammer operator). Weakly
incoherent goals are compossible but difficult (getting one's errands done
and getting home in time to put the baby to bed). Notice that difficulty
can come in many forms: if it is hard to serve four soups for dinner, this
is not because one is physically unable to make four soups. Note also
that it is not intended that the Principle of Incompatibility require the fine
discriminations between probabilities demanded by the expected utility
model.
The Principle of Goal Priority acknowledges the fact that some things
are simply desired for their own sakes. But it does not state that goals
thought to be intrinsically valuable cannot be abandoned when they incohere
with one's other goals and actions. Goal priority also allows for other
non-coherence reasons for taking something to be desirable: that one's parents
or peers say it is, or that experience has shown that it matters. Goals
that are desirable for non-coherence reasons we will call priority goals.
Goals designated as priority goals in the context of one problem may
have that status as a way of representing their coherence with further
goals and actions that are not now being considered. Recall the problem
of figuring out which group of errands to run when the traffic makes it
impossible to run them all. And suppose that one of my errands is to
collect the $5000 in cash that I have just won. I may choose to collect
the $5000, even though it does not cohere with any of the other errands
on my list, and let the apparently more coherent groups of errands go. To
model this choice, we can assign the goal of collecting the cash a very high
priority. But this may make it seem like coherence is playing a far less
important role than simple utility calculations.
This impression would be misleading. The importance of collecting
the $5000 is not merely brute. It is hard to make sense of its importance
except in terms of the ways in which it coheres with other goals to which I
am committed.13 Now I cannot consider the coherence of all my goals and
actions at once; Ranney (forthcoming) suggestively compares the ability to
focus one's attention on questions of coherence to a spotlight: coherence-oriented
adjustments are only made among the elements within the current
circle of the beam. One way to represent the coherence of an element 'in
the beam' with further plans, aims, and so on not now under consideration
is to treat the factor in question as a high priority item. But the priority
of the factor in one's present deliberations represents not brute intrinsic
desirability but further coherence considerations.
The Principle of Judgment expresses the idea that whether one takes one
factor to facilitate or compete with another depends on what one believes,
and what one believes in turn depends on the coherence of competing
clusters of beliefs. If it seems very unlikely to you, in light of other things
you believe, that an action A is a way of bringing about a goal G, then A
should not cohere with G in virtue of being a means to G. There may be
other ways in which judgment matters for deliberative coherence. Some
judgments may cohere with actions (e.g., "It would be courageous to save
the infant" plausibly coheres with saving the infant) even when facilitation
and compossibility are not at issue. DECO does not attempt to represent
coherence-inducing relations of this kind.

5.

The Principle of Symmetry might seem surprising, since the relation of
facilitation is not itself symmetrical. If A facilitates B, it does not follow
that B facilitates A; if A coheres with B in virtue of facilitating it, why
should B cohere with A? The verbal answer to this objection is that
cohering and facilitating are different relations; cohering is something
like hanging together, and that is a symmetrical notion. The heuristic
answer is that using a similar principle in ECHO produced suggestive
and illuminating results, even though explanation is also an asymmetrical
relation; and that we are taking ECHO as a model for DECO. The pragmatic
answer is that experiments involving DECO and its descendants will put
us in a position to see whether this is a satisfactory principle by letting us
see what adopting the principle amounts to, and that until we have done
this it would be premature to take a stand on whether it is satisfactory.14
We will, however, indicate what substantive issue turns on this point.
A widely held view, usually called "instrumentalism" or "Humeanism",
has it that all practical reasoning ultimately proceeds from ends that are
simply given, to means to those ends. The thought here is that since one
can justify an action or a goal only by showing it to be a means to a further
goal, eventually practical justification must bottom out in goals that are not
themselves rationally justifiable or revisable. (Aldous Huxley captured the
spirit of the view very nicely when he said, "Ends are ape-chosen: only the
means are man's.")
Instrumentalism is unrealistic. The complete unrevisability of one's
goals is a sign of neurosis and an unintelligent rigidity, not rationality.
People grow up, and part of growing up is reconsidering one's primary
goals. (Imagine what your life would look like if your ultimate goals were
still those you had as a small child.) And of course the importance of
reconsidering one's primary goals does not end when one stops growing
up.
The Principle of Symmetry allows us to model the revision of goals that
are not themselves the means to further goals. It allows a goal to be adopted
because, for example, one is already committed to its subgoals. Familiar
instances of this process include deciding to run in this summer's race
because one wants to run every day and to wear fashionable sports gear,15
choosing to become a music journalist because one likes music, travel
and essay writing, and opting to double-major when one finds that one is
in any case only two courses short of the second major. Adopting such
higher level goals organizes and further motivates pursuit of the original,
lower level goals, and makes it more likely that the lower level goals will be
attained; and it may have further benefits.16 Because DECO implements the
Principle of Symmetry, experimenting with DECO will help answer several
questions: What commitments are involved in the Principle of Symmetry?
Can the Principle of Symmetry account for the ways in which people
revise their primary goals? Answering these questions will help address the
further question: Is revision of this kind rational? And is rational revision of
one's primary goals possible with the extremely spare apparatus that DECO
allows itself? If one takes the only coherence-inducing relations to be those
that an instrumentalist would acknowledge to be legitimate, can coherence
based decision differ substantively from instrumentalist decision?17

6.

DECO is implemented by representing goals and actions as units in a connectionist
network modeled on ECHO. The network differs from many
connectionist networks in that representation is not distributed: each factor
(goal or action) is represented by an individual unit. Coherence relations
between factors are represented by excitatory links between units, and incoherence
relations by inhibitory links; links can be given weights between
0 and 1. Goal priority is implemented by linking units representing intrinsically
desirable goals to a special unit with a fixed activation level. Units
update themselves in parallel in standard connectionist fashion; the updated
activation level of a unit is a function of its previous activation level,
the activation level of its neighbors, and the weights on the excitatory or
inhibitory links to its neighbors.18 Table I shows the inputs to DECO and
describes their effects.

TABLE I
DECO inputs and their effects

(goal 'G description &optional priority) creates a unit to represent the factor
G. The description is for the benefit of human users; it has no significance for DECO. The
optional argument specifies the priority of factors that are intrinsically desirable or have
other non-coherence priority; a link with weight priority is created between G and the
special unit. The value priority can range between -1 and 1.
(action 'A description) creates a unit to represent action A.
(facilitate 'F1 'F2 &optional degree) states that F1 facilitates F2 to the indicated
degree, which can range between 0 and 1, with 1 as the default. An excitatory link is
created between F1 and F2 with weight proportional to degree.
(facilitate '(F1 F2 F3) 'F4 &optional degree) represents joint facilitation
by F1, F2 and F3 of F4. Excitatory links are created between F1, F2, F3 and F4 with
weights proportional to degree and inversely proportional to the number of facilitating
factors.
(incompatible 'F1 'F2 degree) creates an inhibitory link between F1 and F2
with weight proportional to degree, which can range between 0 and 1.

During a DECO run, activation percolates through the network starting
from the special unit and the priority goals that are connected to it.
Because activation spreads from priority goals, DECO runs generate de
facto asymmetry between priority and non-priority goals; although the
Principle of Symmetry allows priority goals to be activated and deactivated
by the rest of the network, they have a prominent role in determining
which factors are ultimately accepted. (This prominence helps explain the
instrumentalist view that intrinsic goals are the sole criterion for choosing
actions and are consequently beyond criticism. Priority goals do play a
central and important role in deliberation; the instrumentalist sees this, but
is mistaken about what that role is.) As activation spreads, larger and more
densely connected clusters of units suppress competing clusters that are
less connected. When the activation levels of the units stabilize, the plan
whose nodes have significantly positive activations has been selected, and
the plans whose nodes have near-zero or negative activation levels have
been rejected. (For more detail, see Thagard and Millgram 1995, Thagard
1992c.)
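
By way of a concrete (and, again, schematic) illustration, the following Python sketch shows one way input in the format of Table I might be compiled into such a network; the proportionality constants EXCIT and INHIB are our own illustrative assumptions, since Table I says only that link weights are proportional to degree, and the example calls at the end use hypothetical values. The resulting units and links would then be settled by parallel updating of the kind sketched for ECHO above.

EXCIT = 0.04                        # assumed base excitatory weight
INHIB = -0.06                       # assumed base inhibitory weight

class DecoNet:
    def __init__(self):
        self.units = {"SPECIAL"}    # fixed-activation unit for priority goals
        self.links = []             # symmetric (unit1, unit2, weight) triples

    def goal(self, name, description, priority=None):
        # Create a unit; an optional priority links it to the special unit
        # with weight priority (Goal Priority).
        self.units.add(name)
        if priority is not None:
            self.links.append((name, "SPECIAL", priority))

    action = goal                   # actions are represented just like goals

    def facilitate(self, facilitators, target, degree=1.0):
        # Excitatory links proportional to degree and inversely proportional
        # to the number of facilitating factors (Facilitation).
        if isinstance(facilitators, str):
            facilitators = [facilitators]
        w = EXCIT * degree / len(facilitators)
        self.units.add(target)
        for f in facilitators:
            self.units.add(f)
            self.links.append((f, target, w))
        for i, f in enumerate(facilitators):
            for g in facilitators[i + 1:]:
                self.links.append((f, g, w))

    def incompatible(self, f1, f2, degree=1.0):
        # Inhibitory links between incompatible factors (Incompatibility).
        self.units.update((f1, f2))
        self.links.append((f1, f2, INHIB * degree))

# The generic input forms of Table I, rendered through this sketch:
net = DecoNet()
net.goal("G", "a priority goal", 0.8)
net.action("A", "an action")
net.facilitate("A", "G")
net.facilitate(["F1", "F2", "F3"], "F4", 0.9)
net.incompatible("F1", "F2", 0.5)
print(net.links)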
Not all features of DECO are meant to represent significant features
of thought, just as not all features of a logical notation (e.g., the shapes
of the symbols it uses) are meant to represent features of the thought
it reconstructs. In particular, activation levels come as precise numerical
values; but the precise numerical values have no representational function.
What matters is whether a unit's final activation is significantly positive,
on the one hand, or near or below zero, on the other; large differences
between the final activation levels of different units are also of interest.
When a unit's final activation level is neither significantly positive, nor
near or below zero, or when strongly incompatible factors wind up with
very similar final values, this is to be read as indicating that the coherence
considerations deployed do not settle whether the factors in question are
to be endorsed.
However, it would be a mistake to try to ignore too much of DECO's
decision-making process. One might be tempted to think that DECO is
simply the implementation of a function, one mapping DECO's inputs
onto its outputs, and that consequently the apparatus of units and links and
parallel updating can be bypassed in favor of some simple calculation.19
Now if there is a simpler way of calculating the function, it is not obvious
what that is; if we want to specify the function computed by DECO, we
have no realistic alternative to DECO itself. More importantly, DECO is a
perspicuous representation of deliberation in a way that the imagined alternative
would fail to be. DECO's judgments make sense because people's
goals are concordant and discordant with one another, and because DECO
partially mirrors those harmonies and tensions. A perspicuous representation
is indispensable both for seeing why DECO produces the particular
judgments it does, and for adapting and improving DECO and the TDC it
implements.
The comparison to logical notation allows us to address a concern that
will have occurred to many readers. DECO, like ECHO, is sensitive to
the ways problems are encoded. This allows the user to bias the input in
favor of one plan or another, either intentionally, quasi-intentionally (that
is, self-deceptively), or unintentionally.20 But these difficulties are not
peculiar to ECHO or DECO. One must teach the students in one's logic
class not only to manipulate the formalism they are using to represent
inference patterns, but to render their problems into that formalism. There
are typically many ways to do this, and a biased or incompetent rendering
will produce mistaken conclusions, or no conclusions at all. This fact
does not impugn the use of formal logic to study patterns of inference,
and it should not impugn the use of DECO, either. Instead, it should be
addressed as a practical problem. We should try to answer questions like,
how pervasive are different kinds of bias, and how can they be identified
and controlled?21
Both ECHO and DECO are special cases of a more general coherence
assessment problem (Thagard and Verbeurgt 1995). At this point, ECHO
and DECO are structurally quite similar, differing primarily in that DECO
lacks a version of the Principle of Analogy. (Practical analogical reasoning,
we felt, would be too controversial to be given a place in the most severely
minimal TDC.) We expect successors of DECO to diverge from ECHO, as
they come to incorporate principles representing considerations like those
mentioned at the outset of Section 4.

7.

For a more concrete sense of how DECO works, and of how it can be used in
thinking about deliberative coherence, consider the following small agenda
planning problem. On the one hand, I could spend my day in Berkeley.
I would meet Florence at Cafe Venezia, get my brakes fixed, and xerox
[Figure 1. Network constructed for the agenda planning problem. Priority goals are represented
by filled circles, excitatory links by unbroken lines, and inhibitory links by dashes;
not all inhibitory links are shown. Final activation values: A1 0.72, A2 0.76, A3 0.76,
A5 0.56, A6 0.43, A8 0.53, A9 0.75; B1 -0.78, B2 -0.86, B3 -0.86, B4 -0.89, B5 -0.88,
B6 -0.53, B7 -0.89, B8 -0.87.]

course materials at the library. (Call this "Plan A".) On the other, I could
run a number of errands in Oakland, such as going to the post office and
having the cat flea-dipped, and I could catch up on a number of tasks at
home. (Call this "Plan B".) Because time is limited, and because I cannot
be in both Berkeley and Oakland at the same time, the factors that comprise
the respective plans are for the most part incompatible. Plan A differs from
Plan B in that its elements hang together much more closely. Driving into
Berkeley facilitates, and so coheres with, getting to the restaurant, having
my brakes repaired, and meeting Florence; Florence and I can discuss the
syllabus for my new course, I will be able to follow her suggestions up at the
library later on, and so on. My various Oakland-based activities, however,
would hardly hang together at all; they have for the most part nothing to
do with each other. (Table II shows part of the input for this problem; full
listings of the inputs and DECO runs for the examples given in the text
are available at http://cogsci.uwaterloo.ca.) As one would expect, DECO
selects Plan A.
This agenda planning problem highlights the differences between a
coherence-based and a utility-based approach to planning. Plans A and B
each include the same number of priority goals, so from the utility-oriented
standpoint, the two plans should be running neck and neck in competition.
But when DECO chooses between them, the more coherent Plan A handily
trounces Plan B.


TABLE II
Sample DECO input from an agenda planning problem

(goal 'A1 "Try out new Italian restaurant" 1)
(goal 'A2 "Meet Florence" 1)
(goal 'A3 "Discuss ideas for new course" 1)
(goal 'A4 "Drive into Berkeley")
(goal 'A5 "Get brakes looked at" 1)
(goal 'A6 "Leave car with mechanic")
(goal 'A7 "Find someplace to leave car")
(goal 'A8 "Go to library")
(goal 'A9 "Look up materials for course" 1)
(goal 'A10 "Copy course materials" 1)
(facilitate 'A1 'A2)
(facilitate 'A2 'A3)
(facilitate 'A4 'A1)
(facilitate 'A4 'A2)
(facilitate 'A7 'A4)
(facilitate 'A6 'A7)
(facilitate 'A6 'A5)
(facilitate 'A4 'A8)
(goal 'B1 "Go to post office")
(goal 'B2 "Mail package" 1)
(goal 'B3 "Meet Amy for coffee in Rockridge" 1)
(goal 'B4 "Draft letter to chairman" 1)
(goal 'B5 "Referee paper" 1)
(goal 'B6 "Buy children's book for niece" 1)
(goal 'B7 "Go to Walden Pond books")
(goal 'B8 "Have cat flea dipped" 1)
(facilitate 'B1 'B2)
(facilitate 'B7 'B6)
(incompatible 'B1 'A4 1)
(incompatible 'B2 'A5 1)

For purposes of illustration we have chosen two plans with sharply
differing levels of coherence, but DECO is sensitive to quite small differences
in the internal connectedness of plans. DECO's behavior here is
typical of its treatment of similar problems. Let's now consider a somewhat
more ambitious menu-planning problem. This will show DECO at
work on a very different kind of problem, and it will allow us to make a
further point. To anticipate, it is important that experimenting with DECO
points up its limitations and shortcomings, and, indirectly, limitations and
shortcomings of the minimalist Theory of Deliberative Coherence that it
mirrors.
Your goal of cooking for a dinner party involves making each of several
courses (e.g., appetizers, main course dishes, dessert), and ensuring that
there will be something to drink. These are subgoals with respect to your
goal of having a dinner party. You browse through cookbooks looking
for recipes. Because you pick out recipes that you want to eat, or that
sound interesting enough to try out, the recipes that you consider adopting
are both subgoals and intrinsically desired in their own right: making
avocado-grapefruit soup is a way of making an appetizer, but the soup also
sounds mouth-watering on its own. Recipes in turn require that you buy
the ingredients for them that you do not have in your pantry: if you make
the avocado soup, you will have to buy grapefruit juice. And these require
going to various stores: you will get the capers from the supermarket, the
apple-apricot juice from the health-food store, the melons from the produce
market, and the cilantro from the Chinese grocery. Other goals may figure
into your planning also, such as using up the ripe avocados and bananas.
In addition, the different elements of the assortment of goals and actions
from which you need to crystalize a plan stand in various incompatibility
relations. Some recipes compete for particular pots. Some flavors will
clash, and it would be a bad idea to serve several dishes with the same
primary ingredient. You do not want to make too many appetizers or main
course dishes. You can ask the guests to bring wine or dessert, but it would
be impolite to ask for both; if they do bring wine, you will not want to buy
any, and if they bring dessert, you will not need to make any. (Table III
shows part of the DECO input for this problem.)

TABLE III
Sample DECO input from a menu-planning problem

(goal 'G1 "Cook meal" 1)
(goal 'G2 "Make appetizers")
(goal 'G3 "Make main course")
(goal 'G4 "Make dessert")
(goal 'G5 "Have beverage")
(facilitate '(G2 G3 G4 G5) 'G1)
(goal 'R1 "Make avocado-grapefruit soup" 1)
(goal 'R2 "Make Russian potato salad" 1)
(goal 'R3 "Make hummus" 1)
(facilitate 'R1 'G2)
(facilitate 'R2 'G2)
(facilitate 'R3 'G2)
(incompatible 'R1 'R2 .5)
(incompatible 'R1 'R3 .5)
(incompatible 'R2 'R3 .5)
(goal 'B1 "Buy grapefruit juice")
(goal 'B2 "Buy apple or apple-apricot juice")
(facilitate '(B1 B2) 'R1)

Running DECO selects a subset of the actions and goals under consideration.
These elements form a fairly coherent plan, one whose coherence
is expressed by the principles implemented in DECO. In this case, DECO
arrives at a very reasonable menu for the dinner party: a cold sweet soup
and a marinated broccoli salad as appetizers, Greek potatoes, dal and an
okra and tomato dish as the main course, and fruit salad for dessert. This
will mean shopping at the supermarket and the produce center, but skipping
the Chinese grocery and the health food store. Not surprisingly, DECO's
output looks very like the meals prepared by the author of the example.
DECO's choices here are by and large clear-cut. Selected factors have
final activation values of about 0.3 and up; rejected candidates have activations
around zero or below. (These results are fairly insensitive to minor
variations in DECO's parameters.) Only the selection of a beverage produces
ambivalence: the grapefruit juice comes in with a final rating of 0.24,
and asking the guests to bring wine scores 0.26. A second glance shows
why: neither of these goals is closely tied to many other goals. Coherence
considerations do not weigh in strongly for either acceptance or rejection.
One could respond to such an outcome by deciding on some other basis, or
by looking for further considerations with which these goals might cohere
or incohere.
Unlike the agenda planning problem, in which DECO was called upon
to choose between two preformulated plans, DECO is here being asked to
generate a plan from a welter of competing actions and goals. Now there
is a sense in which generating a plan is just a very complicated case of
selecting one: if one thinks of each subset of the factors with which DECO
is presented as a possible plan, then one can think of the task at hand as
that of selecting the best possible plan. Coherence clearly plays as large a
role in plan generation as in plan selection, so one would expect an entirely
adequate TDC, and the program that implements it, to do as well at one as
at the other.

Overall, DECO does not do too badly, but the task is one which DECO
(and the current TDC) is not fully up to. While most of DECO's proposed
meal plan makes sense, there is a surprising category of exceptions: it
recommends purchasing some ingredients for recipes that will not be prepared.
Obtaining the ingredients is facilitated by a trip to the supermarket,
so, by the Principle of Symmetry, DECO treats the trip to the supermarket
as a reason to get those ingredients. (People occasionally do this also:
"While I'm there, I might as well get these items too".) And while DECO
also treats buying the ingredients as a reason to make the dish, that reason
is not always decisive. DECO does much better at selecting among
preformulated plans than it does at generating them.
We said earlier that one of the uses of DECO is to investigate the
implications of the Principle of Symmetry. Running DECO on examples
like this one shows what one is committed to by TDC, and so allows one to
make an informed judgment as to its plausibility. Since one does not want
to be committed to adopting subgoals but not the higher-level goal which
gives them their point, we seem to have a reason to reject the Principle of
Symmetry, and the TDC of which it is part.
That conclusion is not yet warranted. Responsibility for the surprising
result must be shared by the Principle of Symmetry and by one of DECO's
limitations, to which the result directs our attention. Connectionist networks like DECO are clumsy at representing numerical and Boolean constraints. Instrumental reasoning, however, involves the application of such constraints. (Here the constraint is roughly that both the goal and some sufficient set of its subgoals must be selected, or neither.) We cannot assign
responsibility for such results to the Principle of Symmetry until we have
seen how to supplement DECO with a mechanism for representing such
constraints. (Activation-dependent links, of the kind used in CARE (Nelson et al., 1994), are a promising candidate.) Perhaps this would take DECO
one step closer towards being a plan generator; in any case, the next step
to take in improving DECO is evidently to modify it (and its associated
TDC) to represent constraints of this kind. The meal-planning example
shows rather neatly how a computational representation of a theory of
deliberative coherence can not only allow the theory to be assessed by
comparing its judgments to those of human beings, but can focus attention
on those aspects of the theory that need change or further development.22
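
One way to picture what activation-dependent links might add is a link whose contribution to a unit's net input is switched on or off by the activation of a third, gating unit. The sketch below is a guess at the general shape of such a mechanism, not a description of CARE's links; the gating rule, the threshold, and the example weights are assumptions made for illustration.

def net_input(unit, activations, plain_links, gated_links, gate_threshold=0.0):
    # plain_links[unit]: list of (source, weight) pairs -- ordinary links,
    # which always contribute weight * source activation.
    total = sum(w * activations[src] for (src, w) in plain_links.get(unit, []))
    # gated_links[unit]: list of (source, weight, gate) triples -- the link
    # contributes only while the gate unit is active. A battery of such
    # gates could approximate the Boolean constraint in the text: support
    # the subgoal only while the goal it serves is itself being accepted.
    for (src, w, gate) in gated_links.get(unit, []):
        if activations[gate] > gate_threshold:
            total += w * activations[src]
    return total

acts = {"Trip": 0.8, "R1": -0.3, "B1": 0.0}
# The supermarket trip supports buying the soup ingredients only while the
# soup recipe R1 is itself active; with R1 rejected, the support vanishes.
print(net_input("B1", acts, {}, {"B1": [("Trip", 0.05, "R1")]}))   # -> 0.0

With the recipe rejected, the trip to the supermarket no longer lends support to buying its ingredients, which is just the behavior the constraint in the text calls for.
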

8.

Prior to the twentieth century, "Logic" was the name for the study of forms
of thinking and inference rather than the title of a branch of mathematics.


DECO is a tool for doing logic in this old-fashioned sense. It shares with contemporary logical techniques the goal of representing patterns of inference in ways that render them subject to assessment, that is, in ways that help us address the question: is such-and-such a pattern of inference correct?
Like ECHO, DECO represents a notion of coherence. Because DECO
produces judgments of comparative coherence and implicitly defines a
partial ordering of plans with respect to deliberative coherence, DECO
allows experimental results and normative judgements regarding particular pairs of plans to be brought to bear in assessing the notion of coherence it represents. That is, DECO gives empirical content to a notion of deliberative coherence. Because DECO straightforwardly represents its Theory
of Deliberative Coherence (TDC), DECO can be used to assess TDC, both
as a whole, and principle by principle. Experimenting with DECO shows
what one is committed to by adopting this or that set of principles. We
believe that DECO is one of the first members of a family of tools that will
be used in investigating coherence-based patterns of practical inference.
Argument for the pattern of inference represented by DECO would consist primarily in extensive reconstructions of available bodies of practical inference. This task has been barely begun, in part because the notion of coherence modelled by DECO is still too preliminary to warrant embarking on such a project. DECO embodies only a partial understanding of
deliberative coherence: there are important aspects of the instrumental
or means-end component of deliberative coherence that DECO does not
represent; and we have deferred modeling other aspects of deliberative
coherence to future work. Despite its incompleteness, however, the notion
of coherence represented by DECO is interesting: it allows the adoption
of goals that do not facilitate one's priority goals (e.g., supergoals of goals
to which one is committed); it allows priority goals to be given up in the
face of conflicting commitments; and it selects plans in a way that seems
to resemble human decision. We believe that further work with DECO and
its successors is the most effective way to make clear what deliberative
coherence comes to, and what its role in practical reasoning should be.

NOTES

* We are grateful to Nina Amenta, Michael Bratman, Christoph Fehige, Harry Frankfurt,
Susan Hardy, Derek Hawley, Wilfried Hinsch, Jenann Ismael, Ziva Kunda, Nick Littlestone, Michael Ranney, Gabriel Richardson, Patricia Schank, Bill Talbott, Carol Varey and
UC/Berkeley's EMST Reasoning Group for helpful discussion. An ancestor of Section 2
benefitted from comments by and discussion with Alyssa Bernstein, Hilary Bok, Tamar
Gendler, Philip Klein, Tony Laden, Mitzi Lee, Robert Nozick, Hilary Putnam, Tim Scanlon, Sanford Shieh and, especially, Candace Vogler. We thank Susan Hardy and Roy Fleck for
programming assistance. Thagard's research was supported by the Natural Sciences and
Engineering Research Council of Canada.
1 This claim needs a certain amount of qualification. It may be very difficult to find and
extirpate hidden or hard-to-resolve inconsistencies in one's beliefs, and it might be entirely
rational not to work too hard to uncover these inconsistencies; similarly, it might be rational
not to resolve an acknowledged inconsistency of little practical import (Harman 1986, 15ff).
We will ignore these issues here.
2 Saying when desires merely conflict and when they are directly incompatible and require
revision is a hard problem. We will not explore it further here. The idea that some desires
might demand adjustment in this way is due to Candace Vogler.
3 There may be faster alternatives to the O(2^n) calculation, but even these shortcuts are likely to be beyond my capacities. (Bayesian networks are an analogous shortcut to updating one's probability assignments, and they also often prove too demanding. See note 4, below.)
4 The argument here is analogous to Thagard's argument for preferring coherence-driven programs such as ECHO to Bayesian networks (in press). Bayesian networks are computationally expensive, and often intractable; and they require information about probabilities that actual agents normally do not have.
5 For related discussion, see Pollack, 1991.
6 This is not to say that agents should adjust their systems of goals to make them more
coherent because they have the further goal of coherence. To see that this is the wrong
kind of justification for our reasoning the way we do, consider an analogous justification
for means-end reasoning, that we engage in it as a means to the goal of being means-end
reasoners.
7 The relative explanatory coherence of two competing theories is not determined by computing some index of explanatory coherence for each theory separately and comparing the indices. (While indices of this kind - e.g., "harmony" - have been proposed, they are ill-suited for comparing graphs with very different numbers of nodes and links. And it is in any case unclear that the notion of relative coherence should be well-defined for explanations that are not competitors.) Rather, the fact that, at the end of an ECHO run, one explanation remains activated while its competitor does not is interpreted to mean that the former is more coherent than the latter.
8 Cases studied include Lavoisier's argument for the oxygen theory, Darwin's argument
for evolution by natural selection, arguments for and against continental drift, the case
of Copernicus versus Ptolemy, the case of Newton versus Descartes, and contemporary
debates about why the dinosaurs became extinct. Thagard (1989, 1991), Thagard and
Nowak (1990), Nowak and Thagard (1992a, 1992b).
9 Ranney and Thagard (1988), Ranney (forthcoming), Schank and Ranney (1991), Miller
and Read (1991), Read and Marcus-Newhall (1993).
10 See Thagard (1992c), note 1, p. 66.
11 For discussion of related issues, see Millgram (1991).
12 Hurley (1989), esp. chs. 10-11; Kant (1785/1981); Nell (1974); Broadie (1987); Richardson (1987). On analogy, see Holyoak and Thagard (1989, 1995), Thagard et al. (1990); for related discussion, see Nozick (1993).
13 It is likely that most or all extremely high priority goals will turn out to have their
priority partly in virtue of coherence with plans, commitments, and so on, not now under
consideration, at any rate in the case of rational and reasonable agents. While 'I just feel
like it' may account for assignment of moderate priority, treating as overriding a goal that fails to cohere with other considerations - e.g., insisting on washing the car rather than
driving to the airport in time for one's plane, because one feels that one has to, but for no
further reason - is a hallmark of compulsive behavior.
14 The question is complicated by the fact that although the Principle of Symmetry governs the representations of systems of goals constructed by DECO, Goal Priority produces
asymmetries in the ways DECO treats goals. We will return to this point below.
15 The example is due to Mike Thau.
16 See Frankfurt, 1992.
17 One important question that needs to be addressed is how supergoal adoption can be
distinguished from intending unwanted side-effects. To adapt an example from Pollack,
1991, going swimming facilitates getting chlorine in my hair. Getting chlorine in my hair
is an unwanted side-effect because, although I accept it as part of the plan to go swimming,
I will not take further steps to make it happen, such as buying a bottle and rubbing it on my
scalp. DECO does not now model the distinction, and with a little ingenuity one can design
inputs for which DECO makes a choice that is tantamount to intending the side-effect.
However, DECO is a promising experimental testbed for exploring this problem.
18 The new activation level is given by

$$
a_j(t+1) \;=\; a_j(t)(1-\theta) \;+\;
\begin{cases}
\mathit{net}_j\,(\mathit{max} - a_j(t)) & \text{if } \mathit{net}_j > 0,\\
\mathit{net}_j\,(a_j(t) - \mathit{min}) & \text{otherwise,}
\end{cases}
$$

where $a_j(t+1)$ is the unit's updated activation level; $a_j(t)$ is its previous activation level; $\theta$ is a decay parameter; $\mathit{min}$ is the minimum activation ($-1$); $\mathit{max}$ is the maximum activation ($1$); and $\mathit{net}_j$ is the net input to the unit, defined as $\sum_i w_{ij} a_i(t)$, where $w_{ij}$ is the weight of the link between units $i$ and $j$.
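For concreteness, here is a direct Python transcription of this update rule; the decay value is a placeholder, since the note does not report the parameter settings used in the actual DECO runs.

def update_activation(a_j, net_j, theta=0.05, min_act=-1.0, max_act=1.0):
    # One update of unit j's activation, following the rule above.
    decayed = a_j * (1.0 - theta)
    if net_j > 0:
        return decayed + net_j * (max_act - a_j)
    return decayed + net_j * (a_j - min_act)

def net_input(j, activations, links_to_j):
    # net_j = sum over linked units i of w_ij * a_i(t);
    # links_to_j is a list of (i, w_ij) pairs.
    return sum(w_ij * activations[i] for (i, w_ij) in links_to_j)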
19 A parallel mistake is made by Clark Glymour's discussion of ECHO (1992, pp. 470f);
for a reply, see Thagard (1992b).
20 For an example of unintentional bias, see note 22, below.
21 For discussion of the problem in ECHO, see Thagard (1992c, p. 89); for an experiment
that examines intercoder reliability in ECHO, see Schank and Ranney (1992).
22 Experimentation with the computational representation of a TDC can also show that
some problems are not nearly as pressing as they might seem a priori. For instance, it might
seem that coherentist choice would be unduly biased in favor of better understood, but
not necessarily better, plans. Imagine an engineer who works for the Coca-Cola company
designing Coke machines; during his break, he has to choose between coffee from the
coffee machine and a Coke from the adjacent Coke machine. Because he is familiar with
the internal workings of the one machine, but not the other, his representation of his plan
for getting a Coke is much more complex, detailed, and structured - in DECO's terms,
much more coherent - than his representation of the competing plan for getting a coffee.
But this is not a good reason for choosing the Coke over the coffee. (The objection is due to
Michael Bratman.) Now of course sometimes it does make good sense to choose the better
understood over the more poorly understood plan. But this is not one of those times. And,
as Bratman has emphasized (1987), because one of the points of having plans is to be able
to fill them in as one goes, incompleteness should not normally rule a plan out.
However, although one might think that DECO would be bound to choose the more
detailed of the two plans, the actual DECO runs show neither of the competing plans coming out a clear winner.


REFERENCES

BonJour, L.: 1985, The Structure of Empirical Knowledge, Harvard University Press, Cambridge, MA.
Bratman, M.: 1987, Intention, Plans and Practical Reason, Harvard University Press, Cambridge, MA.
Brink, D. O.: 1989, Moral Realism and the Foundations of Ethics, Cambridge University Press, Cambridge.
Broadie, S. W.: 1987, 'The Problem of Practical Intellect in Aristotle's Ethics', in J. Geary (ed.), Proceedings of the Boston Area Colloquium in Ancient Philosophy, Vol. III, University Press of America, Lanham, pp. 229-252.
Frankfurt, H.: 1992, 'On the Usefulness of Final Ends', Iyyun 41, 3-19.
Glymour, C.: 1992, 'Invasion of the Mind Snatchers', in R. Giere (ed.), Cognitive Models of Science, Minnesota Studies in the Philosophy of Science, Vol. 15, University of Minnesota Press, Minneapolis, pp. 465-471.
Harman, G.: 1986, Change in View, MIT Press, Cambridge, MA.
Holyoak, K. and P. Thagard: 1989, 'Analogical Mapping by Constraint Satisfaction', Cognitive Science 13, 295-355.
Holyoak, K. and P. Thagard: 1995, Mental Leaps: Analogies in Creative Thought, MIT Press/Bradford Books, Cambridge, MA.
Hurley, S.: 1989, Natural Reasons, Oxford University Press, Oxford.
Kant, I.: 1785/1981, Grounding for the Metaphysics of Morals, Hackett, Indianapolis.
Miller, L. C. and S. J. Read: 1991, 'On the Coherence of Mental Models of Persons and
Relationships', in F. Fincham and G. Fletcher (eds), Cognition in Close Relationships,
Erlbaum, Hillsdale, NJ, pp. 69-99.
Millgram, E.: 1991, 'Harman's Hardness Arguments', Pacific Philosophical Quarterly
72(3), 181-202.
Nell, O.: 1974, Acting on Principle, Columbia University Press, New York.
Nelson, G., P. Thagard, and S. Hardy: 1994, 'Integrating Analogy with Rules and Explanations', in J. A. Barnden and K. J. Holyoak (eds), Advances in Connectionist and Neural Computation Theory, Vol. 2: Analogical Connections, Ablex, Norwood, NJ, pp. 181-205.
Nowak, G. and P. Thagard: 1992a, 'Copernicus, Ptolemy, and Explanatory Coherence',
in R. Giere (ed.), Cognitive Models of Science, Minnesota Studies in the Philosophy of
Science, Vol. 15, University of Minnesota Press, Minneapolis, pp. 274-309.
Nowak, G. and P. Thagard: 1992b, 'Newton, Descartes, and Explanatory Coherence', in
R. Duschl and R. Hamilton (eds), Philosophy of Science, Cognitive Psychology and
Educational Theory and Practice, SUNY Press, Albany, pp. 69-115.
Nozick, R.: 1993, The Nature of Rationality, Princeton University Press, Princeton.
Pollack, M.: 1991, 'Overloading Intentions for Efficient Practical Reasoning', Nous 25(4),
513-36.
Ranney, M.: forthcoming, 'Explorations in Explanatory Coherence', in E. Bar-On and B.
Eylon (eds), Designing Intelligent Learning Environments: From Cognitive Analysis to
Computer Implementation, Ablex, Norwood, NJ.
Ranney, M. and P. Thagard: 1988, 'Explanatory Coherence and Belief Revision in Naive
Physics', in Proceedings of the Tenth Annual Conference of the Cognitive Science Society,
Erlbaum, Hillsdale, NJ, pp. 426-432.
Rawls, J.: 1971, A Theory of Justice, Harvard University Press, Cambridge, MA.


Read, S. J. and A. Marcus-Newhall: 1993, 'The Role of Explanatory Coherence in the Construction of Social Explanations', Journal of Personality and Social Psychology 65, 429-47.
Richardson, H. S.: 1987, 'Aristotle on Practical Intellect and the Specification of Ends: Response to Broadie', in J. Geary (ed.), Proceedings of the Boston Area Colloquium in Ancient Philosophy, Vol. III, University Press of America, Lanham, pp. 253-261.
Richardson, H. S.: 1990, 'Specifying Norms as a Way to Resolve Concrete Ethical Problems', Philosophy and Public Affairs 19(4), 279-310.
Schank, P. and M. Ranney: 1991, 'Modeling an Experimental Study of Explanatory Coherence', in Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Erlbaum, Hillsdale, NJ, pp. 892-897.
Schank, P. and M. Ranney: 1992, 'Assessing Explanatory Coherence: A New Method for
Integrating Verbal Data with Models of On-Line Belief Revision', in Proceedings of the
Fourteenth Annual Conference of the Cognitive Science Society, Erlbaum, Hillsdale, NJ,
pp. 599-604.
Thagard, P.: 1989, 'Explanatory Coherence', Behavioral and Brain Sciences 12, 435-67.
Thagard, P.: 1991, 'The Dinosaur Debate: Explanatory Coherence and the Problem of
Competing Hypotheses', in J. Pollock and R. Cummins (eds), Philosophy and AI: Essays
at the Interface, MIT Press/Bradford Books, Cambridge, MA, pp. 279-300.
Thagard, P.: 1992a, 'Adversarial Problem Solving: Modelling an Opponent Using Explanatory Coherence', Cognitive Science 16, 123-49.
Thagard, P.: 1992b, 'Computing Coherence', in R. Giere (ed.), Cognitive Models of Science,
Minnesota Studies in the Philosophy of Science, Vol. 15, University of Minnesota Press,
Minneapolis, pp. 485-488.
Thagard, P.: 1992c, Conceptual Revolutions, Princeton University Press, Princeton.
Thagard, P. (in press): 'Probabilistic Networks and Explanatory Coherence', in P. O'Rorke
and G. Luger (eds.), Computing Explanations: AI Perspectives on Abduction, AAAI
Press, Menlo Park, CA.
Thagard, P. and E. Millgram: 1995, 'Inference to the Best Plan: A Coherence Theory of
Decision', in D. Leake and A. Ram (eds), Goal-Driven Learning, MIT Press, Cambridge,
MA, pp. 439-54.
Thagard, P. and G. Nowak: 1990, 'The Conceptual Structure of the Geological Revolution',
in J. Shrager and P. Langley (eds), Computational Models of Discovery and Theory
Formation, Morgan Kaufmann, San Mateo, pp. 259-310.
Thagard, P. and K. Verbeurgt: 1995, 'Coherence', unpublished manuscript.
Thagard, P., K. Holyoak, G. Nelson, and D. Gochfeld: 1990, 'Analog Retrieval by Constraint
Satisfaction', Artificial Intelligence 46, 259-310.

Elijah Millgram
Department of Philosophy
Princeton University
Princeton, NJ 08544
lije@clarity.princeton.edu

Paul Thagard
Department of Philosophy
University of Waterloo
Waterloo, Ont. N2L3G1
prthagar@watarts.uwaterloo.ca
