
PHILOSOPHICAL TOPICS

VOL. 36, NO. 2, FALL 2008

Preface
Robert Brandom delivered his John Locke Lectures, “Between Saying and Doing:
Towards an Analytic Pragmatism,” in Oxford in May and June of 2006. In April
2007, in Prague, a conference devoted to discussing the lectures was held, organized
by the Institute of Philosophy of the Academy of Sciences of the Czech Republic
and the Faculty of Philosophy & Arts of Charles University and with finan-
cial support from the Andrew W. Mellon Foundation. The conference began with
a “School of Inferentialism,” a series of seminars by distinguished philosophers cov-
ering the background to the lectures in Brandom’s work and elsewhere. Professor
Brandom then re-delivered the lectures; each of the six was followed by a critical
paper and Professor Brandom’s response. The critics were John McDowell, John
MacFarlane, Pirmin Stekeler-Weithofer, Huw Price, Jaroslav Peregrin, and Sebastian
Rödl. This volume is made up of their (revised) papers and Professor Brandom’s
responses, along with a paper by Paul Horwich summarizing the material he con-
tributed to the "School of Inferentialism" and an introductory essay by Professor
Brandom himself.
With the publication of the lectures in book form, it is hoped that the papers
from the Prague conference will serve as a guide and stimulus to readers and recap-
ture what Professor Brandom describes as “intense and thought-provoking conver-
sations about this material” at the Prague conference.1
I am most grateful to the contributors to this volume and especially to Pro-
fessor Brandom for his help at every stage.
Edward Minar
Editor, Philosophical Topics

NOTE

1. Robert Brandom, Between Saying and Doing: Towards an Analytic Pragmatism (Oxford: Oxford
University Press, 2008), xx.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Towards an Analytic Pragmatism

Robert Brandom
University of Pittsburgh

Within the Anglophone tradition, pragmatism has often appeared as a current of
thought that stands apart from, and indeed runs in opposition to, the mainstream
of analytic philosophy. This is true whether one uses ‘pragmatist’ in a narrow sense
tailored to the triumvirate of Peirce, James, and Dewey (here one might think of
Russell’s dismissive responses to the latter two), or in a more capacious sense that
includes the early Heidegger, the later Wittgenstein, and, more recently, neo-prag-
matists such as Rorty and Putnam. There are good reasons on both sides for adopt-
ing somewhat adversarial stances, but I think that when we examine them more
closely it becomes possible to see the outlines of a common project, in the service
of which the two camps might find themselves joining forces. In my 2006 John
Locke lectures, entitled Between Saying and Doing: Towards an Analytic Pragmatism
(Oxford University Press, 2008), I explore in more detail one way of pursuing such
a project. In this essay I want to offer a sketch of the basic understanding of the
principal aims of the two movements that motivates that more extended discus-
sion, and to indicate in general terms the sort of pragmatic semantic analysis (not,
I will be insisting, an oxymoron) that might emerge from unifying their only appar-
ently disparate concerns. The intended spirit is irenic, synthetic, and constructive.

I. THE CLASSICAL PROJECT OF ANALYSIS

I think of analytic philosophy as having at its center a concern with semantic rela-
tions between what I will call ‘vocabularies’. Its characteristic form of question is
whether and in what way one can make sense of the meanings expressed by one
kind of locution in terms of the meanings expressed by another kind of locution.
So, for instance, two early paradigmatic projects were to show that everything
expressible in the vocabulary of number-theory, and again, everything expressible
using definite descriptions, is expressible already in the vocabulary of first-order
quantificational logic with identity.
The nature of the key kind of semantic relation between vocabularies has been
variously characterized during the history of analytic philosophy: as analysis, defi-
nition, paraphrase, translation, reduction of different sorts, truth-making, and var-
ious kinds of supervenience—to name just a few contenders. In each case, however,
it is characteristic of classical analytic philosophy that logical vocabulary is accorded
a privileged role in specifying these semantic relations. It has always been taken at
least to be licit to appeal to logical vocabulary in elaborating the relation between
analysandum and analysans—target vocabulary and base vocabulary. I will refer to
this aspect of the analytic project as its commitment to ‘semantic logicism’.1
If we ask which were the vocabulary-kinds whose semantic relations it was
thought to be important to investigate during this period, at least two core programs
of classical analytic philosophy show up: empiricism and naturalism. These vener-
able modern philosophical traditions in epistemology and ontology respectively
were transformed in the twentieth century, first by being transposed into a seman-
tic key, and second by the application of the newly available logical vocabulary to
the self-consciously semantic programs they then became.
As base vocabularies, different species of empiricism appealed to phenomenal
vocabulary, expressing how things appear, or to secondary-quality vocabulary, or,
less demandingly, to observational vocabulary. Typical target vocabularies include
objective vocabulary formulating claims about how things actually are (as opposed
to how they merely appear), primary-quality vocabulary, theoretical vocabulary,
and modal, normative, and semantic vocabularies. The generic challenge is to show
how what is expressed by the use of such target vocabularies can be reconstructed
from what is expressed by the base vocabulary, when it is elaborated by the use of
logical vocabulary.
As base vocabularies, different species of naturalism appealed to the vocabu-
lary of fundamental physics, or to the vocabulary of the natural sciences (including
the special sciences) more generally, or just to objective descriptive vocabulary, even
when not regimented by incorporation into explicit scientific theories. Typical tar-
gets include normative, semantic, and intentional vocabularies.

II. THE PRAGMATIST CHALLENGE

What I want to call the “classical project of analysis,” then, aims to exhibit the mean-
ings expressed by various target vocabularies as intelligible by means of the logical
elaboration of the meanings expressed by base vocabularies thought to be privi-
leged in some important respects—epistemological, ontological, or semantic—
relative to those others. This enterprise is visible in its purest form in what I have
called the “core programs” of empiricism and naturalism, in their various forms. In
my view the most significant conceptual development in this tradition—the biggest
thing that ever happened to it—is the pragmatist challenge to it that was mounted
during the middle years of the twentieth century. Generically, this movement of
thought amounts to a displacement from the center of philosophical attention of
the notion of meaning in favor of that of use: in suitably broad senses of those
terms, replacing concern with semantics by concern with pragmatics. The towering
figure behind this conceptual sea change is, of course, Wittgenstein. In characteriz-
ing it, however, it will be useful to approach his radical and comprehensive critique
by means of some more local, semantically corrosive argumentative appeals to the
practices of deploying various vocabularies rather than the meanings they express.
Wilfrid Sellars (one of my particular heroes) criticizes the empiricist core pro-
gram of the classical project of analysis on the basis of what one must do in order to
use various vocabularies, and so to count as saying or thinking various things. He
argues that none of the various candidates for empiricist base vocabularies are prac-
tically autonomous, that is, could be deployed in a language-game one played though
one played no other. For instance, no discursive practice can consist entirely of mak-
ing noninferential observation reports. For such reliably differentially elicited
responses qualify as conceptually contentful or cognitively significant only insofar as
they can serve as premises from which it is appropriate to draw conclusions, that is,
as reasons for other judgments. Drawing such conclusions is applying concepts
inferentially—that is, precisely not making noninferential observational use of
them.2
Quine offers an even broader pragmatist objection, not only to the empiricist
program, but to essential aspects of the whole analytic semantic project. For he
attacks the very notion of meaning it presupposes. Quine is what I have elsewhere
called a “methodological” pragmatist.3 That is, he takes it that the whole point of a
theory of meaning is to explain, codify, or illuminate features of the use of linguis-
tic expressions. He, like Dummett, endorses the analogy: meaning is to use as the-
ory is to observation. And he argues that postulating meanings associated with bits
of vocabulary yields a bad theory of discursive practice.
If there were such things as meanings that determine how it would be correct
to use our expressions, then those meanings would at least have to determine the
inferential roles of those expressions: what follows from applying them, what apply-
ing them rules out, what is good evidence for or against doing so. But what follows
from what depends on what else is true—on laws of nature and obscure contingent
facts—that is, on what claims can serve as auxiliary hypotheses or collateral prem-
ises in those inferences. If we look at what practical abilities are required to deploy
various bits of vocabulary—at what one has to be able to do in order to count as say-
ing something with them—we do not find any special set of these whose practical
significance can be understood as pragmatically distinctive of semantically neces-
sary or sufficient conditions.4
Quine thought one could save at least the naturalist program by retreating
semantically to the level of reference and truth-conditions. James and Dewey appeal
to the same sort of methodological pragmatism in support of more sweeping sorts
of semantic revisionism—pursuing programs that Rorty, for instance, argues should
be understood as more rejectionist than properly revisionist. And under the ban-
ner “Don’t look to the meaning, look to the use,” Wittgenstein further radicalizes
the pragmatist critique of semantics. Pointing out to begin with that one cannot
assume that uses of singular terms have the job of picking out objects, nor that
declarative sentences are in the business of stating facts, he goes on to deny, in
effect, that such uses even form a privileged center, on the basis of which one can
understand more peripheral ones. (“Language,” he says, “has no downtown.”)
I take it that Wittgenstein also takes the home language-game of the concept
of meaning to be explanation of how expressions are correctly used. And he is pro-
foundly skeptical about the utility or applicability of the model of postulation,
explanation, and theoretical systematization in the case of discursive practices—
about the possibility of systematically deriving aspects of correct use from assigned
meanings. Seen from this perspective, the idea of the classical project of analysis is
to codify, using logical vocabulary, the meanings expressed by one vocabulary—
from which we are to derive proprieties of its use—from the meanings expressed
by some other vocabulary—from which we can derive proprieties of its use. One
idea, I think, is that this enterprise makes sense only if we think of the uses as species
of a genus—of them all being the same general kind of use, say describing, stating
facts, or representing states of affairs. This may seem plausible if we focus on a very
restricted set of uses—just as, in the case of tools, we might be impressed to notice
that nails and hammer, screws and screwdriver, glue and brush all have the func-
tion of attaching more-or-less flat things to one another. So we can think of declar-
ative sentences as stating empirical, physical, normative, modal, and intentional
facts, making claims about such states of affairs (even if we then find ourselves
metaphysically puzzled about the nature of the fact-kinds to which we have thereby
committed ourselves). But if we think of the uses as very different, if we think also
about the carpenter’s level, pencil, and tool belt, if we think of linguistic practice as
a motley, of uses as not coming in a simple, or systematic, or even determinate vari-
ety, then the very idea that there is such a thing as meanings that permit the codifi-
cation of proprieties of quite disparate kinds of use—even with liberal use of logical
elaboration of the meanings—becomes contentious and in need of justification
both in general and in each particular case.
More specifically, Wittgenstein uses the image of “family resemblances” to urge
that the kinds into which linguistic practices and the vocabularies caught up in
them are functionally sorted—what belong together in boxes labeled ‘game’, ‘name’,
‘description’, ‘assertion’, ‘observation’, and so on—do not typically admit of specifi-
cation in terms of underlying principles specifiable in other vocabularies, whether
by genus and differentia(e) or any other kind of explicit rule or definition. It is easy
to understand this line of thought as entailing a straightforward denial of the pos-
sibility of semantic analysis in the classical sense.
One thought underlying these observations about the unsystematic, unsur-
veyable variety of kinds of uses of expressions and about the uncodifiable charac-
ter of those kinds concerns the essentially dynamic character of linguistic practice.
I think Wittgenstein thinks that an absolutely fundamental discursive phenomenon
is the way in which the abilities required to deploy one vocabulary can be practi-
cally extended, elaborated, or developed so as to constitute the ability to deploy
some further vocabulary, or to deploy the old vocabulary in quite different ways.
Many of his thought-experiments concern this sort of process of pragmatic projec-
tion of one practice into another. We are asked to imagine a community that uses
proper names only for people, but then extends the practice to include rivers. There
is no guarantee that interlocutors can master the extended practice, building on
what they can already do. But if they can, then they will have changed the only
essences proper-name usage could be taken to have had.5 In the old practice it
always made sense to ask for the identity of the mother and father of the named
item; in the new practice, that question is often senseless. Again, we are asked to
imagine a community that talked about having gold or silver in one’s teeth, and
extends that practice to talk about having pain in one’s teeth. If as a matter of con-
tingent fact the practitioners can learn to use the expression ‘in’ in the new way,
building on but adapting the old, they will have fundamentally changed the mean-
ings of ‘in’. In the old practice it made sense to ask where the gold was before it was
in one’s tooth; in the new practice asking where the pain was before it was in the
tooth can lead only to a distinctively philosophical kind of puzzlement.6
At every stage, what practical extensions of a given practice are possible for the
practitioners can turn on features of their embodiment, lives, environment, and
history that are contingent and wholly particular to them. And which of those devel-
opments actually took place, and in what order, can turn on any obscure fact. The
reason vocabulary-kinds resist specification by rules, principles, definitions, or
meanings expressed in other vocabularies is that they are the current time-slices of
processes of development of practices that have this dynamic character—and that
is why the collection of uses that is the current cumulative and collective result of
such developments-by-practical-projection is a motley.7 If that is right, then any
codification or theoretical systematization of the uses of those vocabulary-kinds by
associating with them meanings that determine which uses are correct will, if at all
successful, be successful only contingently, locally, and temporarily. Semantics on
this view is an inherently Procrustean enterprise, which can proceed only by theo-
retically privileging some aspects of the use of a vocabulary that are not at all prac-
tically privileged, and spawning philosophical puzzlement about the intelligibility
of the rest.8 On this conception, the classical project of analysis is a disease that rests
on a fundamental, if perennial, misunderstanding—one that can be removed or
ameliorated only by heeding the advice to replace concern with meaning by concern
with use. The recommended philosophical attitude to discursive practice is accord-
ingly descriptive particularism, theoretical quietism, and semantic pessimism.

III. EXTENDING THE PROJECT OF ANALYSIS: PRAGMATICALLY MEDIATED SEMANTIC RELATIONS

On this account Wittgenstein is putting in place a picture of discursive meaning-
fulness or significance that is very different from that on which the classical project
of analysis is predicated. In place of semantics, we are encouraged to do pragmat-
ics—not in the sense of Kaplan and Stalnaker, which is really the semantics of
token-reflexive expressions, nor again in the sense of Grice, which addresses con-
versational heuristics in terms that presuppose a prior, independent, classical
semantics—but ‘pragmatics’ in the sense of the study of the use of expressions in
virtue of which they are meaningful at all. To the formal, mathematically inspired
tradition of Frege, Russell, Carnap, and Tarski, culminating in model-theoretic and
possible-worlds semantics, is opposed an anthropological, natural-historical, social-
practical inquiry aimed both at demystifying our discursive doings, and at deflat-
ing philosophers’ systematic and theoretical ambitions regarding them. I think that
contemporary philosophers of language have tended to draw this opposition in the
starkest possible terms, treating these approaches as mutually exclusive, hence as
requiring that a choice be made between them, thereby marking out a substantial
sociological faultline in the discipline. Those who are moved by the pragmatist pic-
ture generally accept the particularist, quietist conclusions Wittgenstein seems to
have drawn from it. And those committed to some version of the project of seman-
tic analysis have felt obliged to deny the significance of pragmatics in this sense, or
at the least to dismiss it as irrelevant to properly semantic concerns. In the most
extreme cases, the attitudes of anti-pragmatist philosophers of language to
Wittgenstein’s picture verge on that of the Victorian lady to Darwin’s theory: One
hopes that it is not true, and that if it is true, at least that it not become generally
known.
But I do not think we are obliged to choose between these approaches. They
should be seen as complementing rather than competing with one another. Semantics
and pragmatics, concern with meaning and concern with use, ought surely to be
understood as aspects of one, more comprehensive, picture of the discursive.
Pragmatist considerations do not oblige us to focus on pragmatics to the exclusion
of semantics; we can deepen our semantics by the addition of pragmatics. If we
extract consequences from the pragmatists’ observations somewhat more modestly
and construe the analytic project somewhat more broadly, the two will be seen not
only as compatible, but as mutually illuminating. If we approach the pragmatists’
observations in an analytic spirit, we can understand pragmatics as providing special
resources for extending and expanding the analytic semantic project, from exclu-
sive concern with relations among meanings to encompass also relations between
meaning and use. In its most ambitious form, as in the present project, such an
enterprise would aspire to articulate something like a logic of the relations between
meaning and use.
If we leave open the possibility that the use of some vocabulary may be illumi-
nated by taking it to express some sort of meaning or content—that is, if we do not
from the beginning embrace theoretical semantic nihilism—then the most impor-
tant positive pragmatist insight will be one complementary to the methodological
pragmatism I have already identified. The thought underlying the pragmatist line
of thought is that what makes some bit of vocabulary mean what it does is how it
is used. What we could call semantic pragmatism is the view that the only explana-
tion there could be for how a given meaning gets associated with a vocabulary is to
be found in the use of that vocabulary: the practices by which that meaning is con-
ferred or the abilities whose exercise constitutes deploying a vocabulary with that
meaning. To broaden the classical project of analysis in the light of the pragmatists’
insistence on the centrality of pragmatics, we can focus on this fundamental relation
between use and meaning, between practices or practical abilities and vocabularies.
We must look at what it is to use locutions as expressing meanings—that is, at what
one must do in order to count as saying what the vocabulary lets practitioners
express. I am going to call this kind of relation “practice-vocabulary sufficiency”—
or usually, “PV-sufficiency,” for short. It obtains when engaging in a specified set of
practices or exercising a specified set of abilities9 is sufficient for someone to count
as deploying a specified vocabulary.
Of course it matters a lot how we think about these content-conferring,
vocabulary-deploying practices or abilities. The semantic pragmatist’s claim that
use confers meaning (so talk of practices or the exercise of abilities as deploying
vocabularies) reverts to triviality if we are allowed to talk about “using the tilde to
express negation,” “the ability to mean red by the word ‘red’,” or “the capacity to
refer to electrons by the word ‘electron’” (or, I think, even intentions so to refer).
And that is to say that the interest of the PV-sufficiency of some set of practices or
abilities for the deploying of a vocabulary is quite sensitive to the vocabulary in
which we specify those practices-or-abilities. Talk of practices-or-abilities has a defi-
nite sense only insofar as it is relativized to the vocabulary in which those practices-
or-abilities are specified. And that means that besides PV-sufficiency, we should
consider a second basic meaning-use relation: “vocabulary-practice sufficiency,” or
just “VP-sufficiency,” is the relation that holds between a vocabulary and a set of
practices-or-abilities when that vocabulary is sufficient to specify those practices-
or-abilities.10 VP-sufficient vocabularies that specify PV-sufficient practices let one
say what it is one must do to count as engaging in those practices or exercising those
abilities, and so to deploy a vocabulary to say something.
PV-sufficiency and VP-sufficiency are two basic meaning-use relations (MURs).
In terms of those basic relations, we can define a more complex relation: the rela-
tion that holds between vocabulary V’ and vocabulary V when V’ is VP-sufficient to
specify practices-or-abilities P that are PV-sufficient to deploy vocabulary V. This
VV-relation is the composition of the two basic MURs. When it obtains I will say
that V’ is a pragmatic metavocabulary for V. It allows one to say what one must do
in order to count as saying the things expressed by vocabulary V. We can present
this relation graphically in a meaning-use diagram (MUD):

Meaning-Use Diagram #1: Pragmatic Metavocabulary
[Diagram: vocabulary V’ is linked to practices-or-abilities P by a solid arrow labeled 2: VP-suff; P is linked to vocabulary V by a solid arrow labeled 1: PV-suff; and a dotted resultant arrow labeled Res: VV-1,2 runs from V’ to V.]

The conventions of this diagram are:

• Vocabularies are shown as ovals, practices-or-abilities as (rounded) rectangles.
• Basic meaning-use relations are indicated by solid arrows, numbered, and
labeled as to kind of relation.
• Resultant meaning-use relations are indicated by dotted arrows, num-
bered, and labeled as to kind and the basic MURs from which they result.

The idea is that a resultant MUR is the relation that obtains when all of the basic
MURs listed on its label obtain.
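
By way of illustration only, the bookkeeping such a diagram records might be rendered in a few lines of a programming language such as Python; nothing in the apparatus depends on this rendering, and the particular names and data structures are incidental choices of the illustration.

    # Illustrative only: a meaning-use diagram as a list of labeled arrows.
    # Each basic MUR is recorded as a triple (source, kind, target).
    basic_murs = [
        ("P", "PV-suff", "V"),     # 1: practices P are PV-sufficient to deploy vocabulary V
        ("V'", "VP-suff", "P"),    # 2: vocabulary V' is VP-sufficient to specify practices P
    ]

    def resultant_metavocabulary_relations(murs):
        """Compose a VP-suff arrow with a PV-suff arrow through shared practices:
        the resultant is the relation of being a pragmatic metavocabulary."""
        results = []
        for (src1, kind1, tgt1) in murs:
            for (src2, kind2, tgt2) in murs:
                if kind1 == "VP-suff" and kind2 == "PV-suff" and tgt1 == src2:
                    results.append((src1, "pragmatic metavocabulary for", tgt2))
        return results

    print(resultant_metavocabulary_relations(basic_murs))
    # [("V'", 'pragmatic metavocabulary for', 'V')]
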
Being a pragmatic metavocabulary is the simplest species of the genus I want
to introduce here. It is a pragmatically mediated semantic relation between vocabu-
laries. It is pragmatically mediated by the practices-or-abilities that are specified by
one of the vocabularies (which say what counts as doing that) and that deploy or are
the use of the other vocabulary (what one says by doing that). The semantic rela-
tion that is established thereby between the two vocabularies is of a distinctive sort,
quite different from, for instance, definability, translatability, reducibility, and
supervenience. My basic suggestion for extending the classical project of analy-
sis so as to incorporate as essential positive elements the insights that animate
the pragmatist critique of that project is that, alongside the classical semantic
relations between vocabularies that project has traditionally appealed to, we con-
sider also pragmatically mediated ones—of which the relation of being a prag-
matic metavocabulary is a paradigm.
Under what circumstances would this simplest pragmatically mediated seman-
tic relation be philosophically interesting, when considered in connection with the
sorts of vocabularies that have traditionally been of most interest to classical analy-
sis? At least one sort of result that could be of considerable potential significance, I
think, is if it turned out that in some cases pragmatic metavocabularies exist that
differ significantly in their expressive power from the vocabularies for the deploy-
ment of which they specify sufficient practices-or-abilities. I will call that phenom-
enon “pragmatic expressive bootstrapping.” If the pragmatic metavocabulary is strictly
weaker in expressive power than the vocabulary it lets one deploy, I will call that strict
pragmatic expressive bootstrapping. An example of a claim of this shape—though of course
it is not expressed in terms of the machinery I have been introducing—is Huw Price’s
pragmatic normative naturalism.11 He argues, in effect, that although normative vocab-
ulary is not reducible to naturalistic vocabulary, it might still be possible to say in
wholly naturalistic vocabulary what one must do in order to be using normative
vocabulary. If such a claim about the existence of an expressively bootstrapping nat-
uralistic pragmatic metavocabulary for normative vocabulary could be made out,
it would evidently be an important chapter in the development of the naturalist
core program of the classical project of philosophical analysis. It would be a para-
digm of the sort of payoff we could expect from extending that analytic project by
including pragmatically mediated semantic relations. (Later on I’ll discuss briefly a
claim of this shape concerning indexical vocabulary.)
The meaning-use diagram of the pragmatically mediated semantic relation of
being a pragmatic metavocabulary illustrates a distinctive kind of analysis of that
relation. It exhibits that relation as the resultant, by composition, of the two basic
meaning-use relations of PV-sufficiency and VP-sufficiency. A complex MUR is
analyzed as the product of operations applied to basic MURs. This is meaning-use
analysis. The same analytic apparatus applies also to more complex pragmatically
mediated semantic relations. Consider one of the pragmatist criticisms that Sellars
addresses to the empiricist core program of the classical analytic project. It turns
on the assertion of the pragmatic dependence of one set of vocabulary-deploying
practices-or-abilities on another.
Because he thinks part of what one is doing in saying how things merely appear
is withholding a commitment to their actually being that way, and because one can-
not be understood as withholding a commitment that one cannot undertake, Sellars
concludes that one cannot have the ability to say or think how things seem or
appear unless one also has the ability to make claims about how things actually are.
In effect, this Sellarsian pragmatist critique of the phenomenalist form of empiri-
cism consists in the claim that the practices that are PV-sufficient for ‘is’-φ talk are
PP-necessary for the practices that are PV-sufficient for ‘looks’-φ talk.12 That prag-
matic dependence of practices-or-abilities then induces a resultant pragmatically
mediated semantic relation between the vocabularies. The meaning-use diagram
for this claim is:

Meaning-Use Diagram #2: Pragmatically Mediated Semantic Presupposition
[Diagram: practices P_looks-φ are linked to vocabulary V_looks-φ by a solid arrow labeled 3: PV-suff; practices P_is-φ are linked to vocabulary V_is-φ by a solid arrow labeled 1: PV-suff; an arrow labeled 2: PP-nec runs from P_looks-φ to P_is-φ; and a dotted resultant arrow labeled Res1: VV-1,2,3 runs from V_looks-φ to V_is-φ.]

The resultant MUR here is a kind of complex, pragmatically mediated, VV-necessity, or semantic presupposition.
In fact, although Sellars’s argument for the crucial PP-necessity relation of
pragmatic dependence of one set of vocabulary-deploying practices-or-abilities on
another is different, his argument against the observational version of empiri-
cism—the claim that purely noninferential, observational uses do not form an
autonomous discursive practice, but presuppose inferential uses—has exactly the
same form:

Meaning-Use Diagram #3: Pragmatically Mediated Semantic Presupposition
[Diagram: the same structure as Diagram #2, with V_observational and V_inferential as the vocabularies and P_observational and P_inferential as the practices, related by the arrows 3: PV-suff, 1: PV-suff, and 2: PP-nec, and by the dotted resultant arrow Res1: VV-1,2,3 running from V_observational to V_inferential.]

For these cases, we can say something further about the nature of the pragmatically
mediated semantic relation that is analyzed as the resultant MUR in these dia-
grams. For instead of jumping directly to this VV resultant MUR, we could have
put in the composition of the PP-necessity and second PV-sufficiency relation,
yielding a kind of complex pragmatic presupposition:

Meaning-Use Diagram #4: Composition
[Diagram: as in Diagram #2, relating V_looks-φ, V_is-φ, P_looks-φ, and P_is-φ by the arrows 3: PV-suff, 1: PV-suff, and 2: PP-nec, but now with a dotted resultant arrow labeled Res2: PV-2,3, the composition of the PP-necessity relation (2) with the PV-sufficiency relation (3).]

If this diagram were completed by an arrow from V_is-φ to V_looks-φ such that the same
diagonal resultant arrow could represent both the composition of relations 2 and 3
and the composition of relation 1 and the newly supplied one, then category theo-
rists would say that the diagram commutes. And the arrow that needs to be supplied
to make the diagram commute they call the retraction of relation 1 through the
composition Res2:

Meaning-Use Diagram #5: Composition and Retraction
[Diagram: as in Diagram #4, with an additional dotted arrow from V_is-φ to V_looks-φ labeled “Retraction of 1 through Res2”.]

After composition, then, the next most complex form of resultant MUR is retrac-
tion. Analyzing the structure of Sellars’s pragmatist arguments against empiricism
requires recognizing the pragmatically mediated semantic relation he claims holds
between phenomenal and objective vocabulary as the retraction of a constellation
of more basic meaning-use relations.

IV. AUTOMATA: SYNTACTIC PV-SUFFICIENCY AND VP-SUFFICIENCY

Now this is all extremely abstract. To make it more definite, we need to fill in (at
least) the notions of vocabulary, practice-or-ability, PV-sufficiency, and VP-suffi-
ciency, which are the fundamental elements that articulate what I am calling the
“meaning-use analysis” of resultant meaning-use relations—in particular, the prag-
matically mediated semantic relations between vocabularies that I am claiming we
must acknowledge in order to pursue the classical project of philosophical analysis
in the light of what is right about the pragmatist critique of it. We can begin to do
that by looking at a special case in which it is possible to be unusually clear and pre-
cise about the things and relations that play these metatheoretic roles. This is the
case where ‘vocabulary’ takes a purely syntactic sense. Of course, the cases we even-
tually care about involve vocabularies understood in a sense that includes their
semantic significance. But besides the advantages of clarity and simplicity, we will
find that some important lessons carry over from the syntactic to the semantic case.
The restriction to vocabularies understood in a spare syntactic sense leads to
correspondingly restricted notions of what it is to deploy such a vocabulary, and
what it is to specify practices-or-abilities sufficient to deploy one. Suppose we are
given an alphabet, which is a finite set of primitive sign types—for instance, the let-
ters of the English alphabet. The universe generated by that alphabet then consists
of all the finite strings that can be formed by concatenating elements drawn from
the alphabet. A vocabulary over such an alphabet—in the syntactic sense I am now
after—is then any subset of the universe of strings that alphabet generates. If the
generating alphabet is the English alphabet, then the vocabulary might consist of
all English sentences, all possible English texts, or all and only the sentences of
Making It Explicit.13
What can we say about the abilities that count as deploying a vocabulary in this
spare syntactic sense?14 The abilities in question are the capacity to read and write
the vocabulary. In this purely syntactic sense, ‘reading’ it means being able practi-
cally to distinguish within the universe generated by the alphabet, strings that do,
from those that do not, belong to the specified vocabulary. And ‘writing’ it means
practically being able to produce all and only the strings in the alphabetic universe
that do belong to the vocabulary.
We assume as primitive abilities the capacities to read and write, in this sense,
the alphabet from whose universe the vocabulary is drawn—that is, the capacity to
respond differentially to alphabetic tokens according to their type, and to produce
tokens of antecedently specified alphabetic types. Then the abilities that are PV-suf-
ficient to deploy some vocabularies can be specified in a particularly simple form.
They are finite-state automata (FSAs). As an example, suppose we begin with the
alphabet {a, h, o, !}. Then we can consider the laughing Santa vocabulary, which
consists of strings such as ‘hahaha!’, ‘hohoho!’, ‘hahahoho!’, ‘hohoha!’, and so on.15
Here is a graphical representation of a laughing Santa finite-state automaton, which
can read and write the laughing Santa vocabulary:

The Laughing Santa Automaton
[Diagram: a four-state finite-state automaton. State 1 (the starting state) has an arc labeled ‘h’ to State 2; State 2 has arcs labeled ‘a’ and ‘o’ to State 3; State 3 has an arc labeled ‘!’ to State 4 (the final state) and an arc labeled ‘h’ back to State 2.]

The numbered nodes represent the states of the automaton, and the alphabetically
labeled arcs represent state-transitions. By convention, the starting state is repre-
sented by a square (State 1), and the final state by a circle with a thick border (State
4).
As a reader of the laughing Santa vocabulary, the task of this automaton is to
process a finite string, and determine whether or not it is a licit string of the vocab-
ulary. It processes the string one alphabetic character at a time, beginning in State
1. It recognizes the string if and only if (when and only when) it arrives at its final
state, State 4. If the first character of the string is not an ‘h’, it remains stuck in State
1, and rejects the string. If the first character is an ‘h’, it moves to State 2, and
processes the next character. If that character is not an ‘a’ or an ‘o’, it remains stuck
in State 2, and rejects the string. If the character is an ‘a’ or an ‘o’, it moves to State
3. If the next character is an exclamation point, it moves to State 4, and recognizes
the string ‘ha!’ or ‘ho!’—the shortest ones in the laughing Santa vocabulary. If
instead the next character is an ‘h’, it goes back to State 2, and repeats itself in loops
of ‘ha’s and ‘ho’s any number of times until an exclamation point is finally reached,
or it is fed a discordant character.
As a writer of the laughing Santa vocabulary, the task of the automaton is to
produce only licit strings of that vocabulary, by a process that can produce any and
all such strings. It begins in its initial state, State 1, and emits an ‘h’ (its only avail-
able move), changing to State 2. In this state, it can produce either an ‘a’ or an ‘o’—
it selects one at random16—and goes into State 3. In this state, it can either tack on
an exclamation point, and move into its final state, State 4, finishing the process, or
emit another ‘h’ and return to State 2 to repeat the process. In any case, whenever it
reaches State 4 and halts, the string it has constructed will be a member of the
laughing Santa vocabulary.
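
By way of illustration only, here is one way the laughing Santa automaton might be written down in a programming language such as Python; the rendering is a sketch, and the particular names are incidental, but the two functions correspond to the reading and writing abilities just described.

    import random

    # The laughing Santa automaton: state -> {character -> next state}. State 1 is
    # the starting state and State 4 the final state, as in the diagram above.
    TRANSITIONS = {
        1: {'h': 2},
        2: {'a': 3, 'o': 3},
        3: {'h': 2, '!': 4},
    }
    FINAL_STATE = 4

    def reads(string):
        """Recognize a string: True just in case it belongs to the laughing Santa vocabulary."""
        state = 1
        for char in string:
            if state == FINAL_STATE or char not in TRANSITIONS.get(state, {}):
                return False            # stuck: no licit transition on this character
            state = TRANSITIONS[state][char]
        return state == FINAL_STATE

    def writes():
        """Produce a licit string of the laughing Santa vocabulary, choosing at random."""
        state, output = 1, ''
        while state != FINAL_STATE:
            char = random.choice(sorted(TRANSITIONS[state]))
            output += char
            state = TRANSITIONS[state][char]
        return output

    # reads('hahaha!') and reads('hohoha!') are True; reads('haha') and reads('ah!') are False.
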
I hope this brief rehearsal makes it clear how the constellation of nodes and
arrows that makes up this directed graph represents the abilities to read and write
(recognize and produce arbitrary strings of) the laughing Santa vocabulary.17 What
it represents is abilities that are PV-sufficient to deploy that vocabulary—that is, read
and write it, in the attenuated sense appropriate to this purely syntactic case. And
the digraph representation is itself a vocabulary that is VP-sufficient to specify those
vocabulary-deploying abilities. That is, the digraph representation of this finite-
state automaton is a pragmatic metavocabulary for the laughing Santa vocabulary.
The relation between the digraph vocabulary and the laughing Santa vocabulary is,
then, a pragmatically mediated—not now semantic, but syntactic—relation between
vocabularies.
It may seem that I am stretching things by calling the digraph form of repre-
sentation a ‘vocabulary’. It will be useful, as a way of introducing my final point in
the vicinity, to consider a different form of pragmatic metavocabulary for the
laughing Santa vocabulary. Besides the digraph representation of a finite-state
automaton, we can also use a state-table representation. For the laughing Santa
automaton this is:
      State 1   State 2   State 3
a     Halt      3         Halt
h     2         Halt      2
o     Halt      3         Halt
!     Halt      Halt      4

In read mode, the automaton starts in State 1. To see what it will do if fed a partic-
ular character, we look at the row labeled with that character. The LSA will halt if
the input string starts with anything other than an ‘h’; if the first character is an ‘h’,
it changes to State 2. In that state, the automaton specified by the table will halt unless the next
character is an ‘a’ or an ‘o’, in which case it changes to State 3, and so on. (There is
no column for State 4, since it is the final state, and accepts/produces no further
characters.) Clearly there is a tabular representation corresponding to any digraph
representation of an FSA, and vice versa. Notice further that we need not use a two-
dimensional table to convey this information. We could put the rows one after
another, in the form:
aHalt3Halth2Halt2oHalt3Halt!HaltHalt4.
This is just a string, drawn from a universe generated by the alphabet of the LSA,
together with ‘Halt’ and the designations of the states of that automaton. The
strings that specify finite-state automata that deploy vocabularies defined over the
same basic alphabet as the LSA then form a vocabulary in the technical syntactic
sense we have been considering. And that means we can ask about the automata
that can read and write those state-table encoding vocabularies. The meaning-use
diagram for this situation is then:

Meaning-Use Diagram #6: Specifying the Automaton that Deploys the Laughing Santa Vocabulary
[Diagram: V_LSA State-Table is linked to P_Laughing Santa Automaton by a solid arrow labeled 2: VP-suff; P_Laughing Santa Automaton is linked to V_Laughing Santa by a solid arrow labeled 1: PV-suff; a dotted resultant arrow labeled Res: VV-1,2 runs from V_LSA State-Table to V_Laughing Santa; and a further solid arrow labeled 3: PV-suff runs from P_LSA State-Table Automaton to V_LSA State-Table.]
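
Again purely for illustration, the state-table representation can itself be treated as data driving a perfectly generic reader; in the following Python sketch (the names are incidental) the table above is the specification, and the little interpreter that runs it plays the role of the specified automaton.

    # The LSA state table as data: (state, character) -> next state; a missing entry means 'Halt'.
    STATE_TABLE = {
        (1, 'h'): 2,
        (2, 'a'): 3, (2, 'o'): 3,
        (3, 'h'): 2, (3, '!'): 4,
    }

    def run_table(table, string, start=1, final=4):
        """A generic table-driven reader: accept the string just in case the table
        leads from the starting state to the final state on exactly that string."""
        state = start
        for char in string:
            if (state, char) not in table:
                return False            # 'Halt': reject the string
            state = table[(state, char)]
        return state == final

    # run_table(STATE_TABLE, 'hohoho!') is True; run_table(STATE_TABLE, 'hoho') is False.
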

V. THE CHOMSKY HIERARCHY AND A SYNTACTIC EXAMPLE OF PRAGMATIC EXPRESSIVE BOOTSTRAPPING

Restricting ourselves to a purely syntactic notion of a vocabulary yields a clear sense
of ‘pragmatic metavocabulary’: both the digraph and the state-table vocabularies
are VP-sufficient to specify practical abilities articulated as a finite-state automaton
that is PV-sufficient to deploy—in the sense of recognizing and producing—the
laughing Santa vocabulary, as well as many others. (Of course, it does that only
against the background of a set of abilities PV-sufficient to deploy those vocabular-
ies.) Perhaps surprisingly, it also offers a prime example of strict pragmatic expres-
sive bootstrapping. For in this setting we can prove that one vocabulary that is
expressively weaker than another can nonetheless serve as an adequate pragmatic
metavocabulary for that stronger vocabulary. That is, even though one cannot say
in the weaker vocabulary everything that can be said in the stronger one, one can
still say in the weaker one everything that one needs to be able to do in order to
deploy the stronger one.
Here the relevant notion of the relative expressive power of vocabularies is also
a purely syntactic one. Already in the 1950s, Chomsky offered mathematical char-
acterizations of the different sets of strings of characters that could be generated by
different classes of grammars (that is, in my terms, characterized by different kinds
of syntactic metavocabularies) and computed by different kinds of automata. The
kinds of vocabulary, grammar, and automata lined up with one another, and could
be arranged in a strict expressive hierarchy: the Chomsky hierarchy. It is summa-
rized in the following table:

Vocabulary               Grammar                          Automaton

Regular                  A → aB, A → a                    Finite-State Automaton
Context-Free             A → <anything>                   Push-Down Automaton
Context-Sensitive        c1 A c2 → c1 <anything> c2       Linear Bounded Automaton
Recursively Enumerable   No restrictions on rules         Turing Machine (= 2-stack PDA)

The point I want to make fortunately does not require us to delve very deeply into
the information summarized in this table. A few basic points will suffice. The first
thing to realize is that not all vocabularies in the syntactic sense we have been pur-
suing can be read and written by finite-state automata. For instance, it can be
shown that no finite-state automaton is PV-sufficient to deploy the vocabulary a^n b^n,
defined over the alphabet {a,b}, which consists of all strings of any arbitrary num-
ber of ‘a’s followed by the same number of ‘b’s. The idea behind the proof is that in
order to tell whether the right number of ‘b’s follow the ‘a’s (when reading) or to
produce the right number of ‘b’s (when writing), the automaton must somehow
keep track of how many ‘a’s have been processed (read or written). The only way an
FSA can store information is by being in one state rather than another. So, it could
be in one state—or in one of a class of states—if one ‘a’ has been processed, another
if two have, and so on. But by definition, a finite-state automaton only has a finite
number of states, and that number is fixed in advance of receiving its input or pro-
ducing its output. Whatever that number of states is, and whatever system it uses
to code numbers into states (it need not be one-to-one—it could use a decimal
coding, for instance), there will be some number of ‘a’s that is so large that the
automaton runs out of states before it finishes counting. But the vocabulary in
question consists of arbitrarily long strings of ‘a’s and ‘b’s. In fact, it is possible to
say exactly which vocabularies finite-state automata (specifiable by digraphs and
state-tables of the sort illustrated above) are capable of deploying. These are called
the ‘regular’ vocabularies (or languages).
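
As an illustration of what is at issue (not, of course, a proof), here is a recognizer for a^n b^n sketched in Python; the counter it relies on can take unboundedly many values, and that is exactly the resource a finite-state automaton, with its fixed finite set of states, lacks.

    def reads_anbn(string):
        """Recognize a^n b^n (taking n >= 1): some 'a's followed by just as many 'b's."""
        count, i = 0, 0
        while i < len(string) and string[i] == 'a':
            count += 1                  # unbounded counting: no fixed finite set of
            i += 1                      # states could store every possible value
        if count == 0:
            return False
        while i < len(string) and string[i] == 'b':
            count -= 1
            i += 1
        return i == len(string) and count == 0

    # reads_anbn('aaabbb') is True; reads_anbn('aaabb') and reads_anbn('abab') are False.
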
The next point is that slightly more complex automata are capable of deploy-
ing vocabularies, such as a^n b^n, that are not regular, and hence cannot be read or
written by finite-state automata. As our brief discussion indicated, intuitively the
problem FSAs have with languages like a^n b^n is that they lack memory. If we give
them a memory, we get a new class of machines: (nondeterministic18) push-down
automata (PDAs). In addition to being able to respond differentially to and pro-
duce tokenings of the alphabetic types, and being able to change state, PDAs can
push alphabetic values to the top of a memory-stack, and pull such values from the
top of that stack. PDAs can do everything that finite-state automata can do, but
they can also read and write many vocabularies, such as a^n b^n, that are not regular,
and so cannot be read and written by FSAs. The vocabularies they can deploy are
called “context-free.” All regular vocabularies are context-free, but not vice versa.
This proper containment of classes of vocabularies provides a clear sense, suitable
to this purely syntactic setting, in which one vocabulary can be thought of as
“expressively more powerful” than another: the different kinds of grammar can
specify, and the different kinds of automata can compute, ever larger classes of
vocabularies. Context-free vocabularies that are not regular require more powerful
grammars to specify them, as well as more powerful automata to deploy them. FSAs
are special kinds of PDAs, and all the automata are special kinds of Turing
machines. Recursively enumerable vocabularies are not in general syntactically
reducible to context-sensitive, context-free, or regular ones. And the less capable
automata cannot read and write all the vocabularies that can be read and written
by Turing machines.
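
For illustration, here is the same vocabulary handled in the push-down style, sketched in Python: a marker is pushed for each ‘a’ and popped for each ‘b’, so that the memory doing the work is a stack rather than the counter of the previous sketch.

    def pda_reads_anbn(string):
        """A push-down style recognizer for a^n b^n: finitely many states plus a stack."""
        state, stack = 'reading_as', []
        for char in string:
            if state == 'reading_as' and char == 'a':
                stack.append('A')               # push a marker for each 'a'
            elif state in ('reading_as', 'reading_bs') and char == 'b' and stack:
                stack.pop()                     # pop one marker for each 'b'
                state = 'reading_bs'
            else:
                return False                    # anything else is rejected
        return state == 'reading_bs' and not stack

    # pda_reads_anbn('aaabbb') is True; pda_reads_anbn('aabbb') is False.
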
Nonetheless, if we look at pragmatically mediated relations between these syn-
tactically characterized vocabularies, we find that they make possible a kind of strict
expressive bootstrapping that permits us in a certain sense to evade the restrictions
on expressive power enforced for purely syntactic relations between vocabularies.
The hierarchy dictates that only the abilities codified in Turing machines—two-
stack push-down automata—are PV-sufficient to deploy recursively enumerable
vocabularies in general. But now we can ask: what class of languages is VP-sufficient
to specify Turing machines, and hence to serve as sufficient pragmatic metavocabu-
laries for recursively enumerable vocabularies in general? The surprising fact is that
the abilities codified in Turing machines—the abilities to recognize and produce
arbitrary recursively enumerable vocabularies—can quite generally be specified
in context-free vocabularies. It is demonstrable that context-free vocabularies are
strictly weaker in syntactic expressive resources than recursively enumerable vocab-
ularies. The push-down automata that can read and write only context-free vocab-
ularies cannot read and write recursively enumerable vocabularies in general. But
it is possible to say in a context-free vocabulary what one needs to be able to do in
order to deploy recursively enumerable vocabularies in general.
The proof of this claim is tedious, but not difficult, and the claim itself is not
at all controversial—though computational linguists make nothing of it, having
theoretical concerns very different from those that lead me to underline this fact.
(My introductory textbook leaves the proof as an exercise to the reader.19) General-
purpose computer languages such as Pascal and C++ can specify the algorithms a
Turing machine, or any other universal computer, uses to compute any recursively
enumerable function, hence to recognize or produce any recursively enumerable
vocabulary. And they are invariably context-free languages20—in no small part just
because the simplicity of this type of grammar makes it easy to write parsers for
them. Yet they suffice to specify the state-table, contents of the tape (or of the dual
stacks), and primitive operations of any and every Turing machine. Here is the
MUD characterizing this pragmatically mediated relation between syntactically
characterized vocabularies:
Meaning-Use Diagram #7: Syntactic Pragmatic Expressive Bootstrapping
[Diagram: V_Context-Free is linked to P_Turing Machine by a solid arrow labeled 2: VP-suff; P_Turing Machine is linked to V_Recursively Enumerable by a solid arrow labeled 1: PV-suff; a dotted resultant arrow labeled Res1: VV-1,2 runs from V_Context-Free to V_Recursively Enumerable; and a further solid arrow labeled 3: PV-suff runs from P_Push-Down Automaton to V_Context-Free.]

I called the fact that context-free vocabularies can be adequate pragmatic metavo-
cabularies for recursively enumerable vocabularies in general ‘surprising’, because
of the provable syntactic irreducibility of the one class of vocabularies to the other.
But if we step back from the context provided by the Chomsky hierarchy, we can
see why the possibility of such pragmatic expressive bootstrapping should not, in
the end, be surprising. For all the result really means is that context-free vocabular-
ies let one say what it is one must do in order to say things they cannot themselves
say, because the ability to deploy those context-free vocabularies does not include
the abilities those vocabularies let one specify. Thus, for instance, there is no reason
that an FSA could not read and write a vocabulary that included commands such
as “Push an ‘a’ onto the stack,”—and thus specify the program of a PDA—even
though it itself has no stack, and could not do what the vocabulary it is deploying
specifies. A coach might be able to tell an athlete exactly what to do, and even how
to do it, even though the coach cannot himself do what he is telling the athlete to
do, does not have the abilities he is specifying. We ought not to boggle at the possi-
bility of an expressively weaker pragmatic metavocabulary having the capacity to
say what one must do in order to deploy an expressively stronger one. We should
just look to see where this seems in fact to be possible for vocabularies we care
about, and what we can learn from such relations when they do obtain.
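
A toy illustration of the coach’s point, in Python (the command vocabulary and the little interpreter are my own inventions for the purpose): each command below has one of two fixed shapes, so a finite-state device could perfectly well read and write lists of them, even though what the commands specify is the operation of a machine with a stack.

    # A 'program' in a flat command vocabulary: every command is either 'push <symbol>'
    # or 'pop'. Checking that a string has one of these two shapes requires no stack.
    program = ["push a", "push a", "pop", "push b", "pop", "pop"]

    def run(program):
        """Carry out the specified stack operations—something the device that merely
        reads and writes the command vocabulary need not itself be able to do."""
        stack = []
        for command in program:
            if command.startswith("push "):
                stack.append(command[len("push "):])
            elif command == "pop":
                if not stack:
                    raise RuntimeError("pop from an empty stack")
                stack.pop()
        return stack

    print(run(program))    # []
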

VI. SEMANTIC EXAMPLES OF PRAGMATIC EXPRESSIVE BOOTSTRAPPING AND FURTHER BASIC AND RESULTANT MURS

Let us recall what motivated this rehearsal of some elements of automaton theory
and introductory computational linguistics. I suggested that a way to extend the
classical project of semantic analysis so as to take account of the insights of its prag-
matist critics is to look analytically at relations between meaning and use. More
specifically, I suggested focusing to begin with on two in some sense complemen-
tary relations: the one that holds when some set of practices-or-abilities is PV-suf-
ficient to deploy a given vocabulary, and the one that holds when some vocabulary
is VP-sufficient to specify a given set of practices-or-abilities. The composition of
these is the simplest pragmatically mediated semantic relation between vocabular-
ies: the relation that holds when one vocabulary is a sufficient pragmatic metavo-
cabulary for another. It is a paradigm of the infinite, recursively generable class of
complex, pragmatically mediated semantic relations that I propose to lay alongside
the other semantic relations between vocabularies that have been investigated by
analytic philosophers (for instance those who address the core programs of empiri-
cism and naturalism): relations such as analyzability, definition, translation, reduc-
tion, truth-making, and supervenience. I suggested further that pragmatic
metavocabularies might be of particular interest in case they exhibited what I called
“expressive bootstrapping”—cases, that is, in which the expressive power of the
pragmatic metavocabulary differs markedly from that of the target vocabulary,
most strikingly, when the metavocabulary is substantially expressively weaker—a
phenomenon Tarski has led us not to expect for semantic metavocabularies, which
in general must be expressively stronger than the vocabularies they address.
We have now seen that all of these notions can be illustrated with particular
clarity for the special case of purely syntactically characterized vocabularies. The
abilities that are PV-sufficient to deploy those vocabularies, in the sense of the
capacity to recognize and produce them, can be thought of as various sorts of auto-
mata. There are several well-established, different-but-equivalent vocabularies that
are known to be VP-sufficient to specify those automata. In this special syntactic
case we can accordingly investigate the properties of pragmatic metavocabularies,
and when we do, we find a striking instance of strict expressive bootstrapping in a
pragmatically mediated syntactic relation between vocabularies.
Of course, the cases we really care about involve semantically significant vocab-
ularies. Are there any interesting instances of these phenomena in such cases? I have
indicated briefly how some of Sellars’s pragmatist criticisms of various ways of pur-
suing the empiricist program can be understood to turn on pragmatically mediated
semantic relations. And I mentioned Huw Price’s idea that although normative
vocabulary is not semantically reducible to naturalistic vocabulary, naturalistic
vocabulary might suffice to specify what one must do—the practices-or-abilities
one must engage in or exercise—in order to deploy normative vocabulary. Here is
another example that I want to point to, though I cannot develop the claim here.

For roughly the first three-quarters of the twentieth century, philosophers who
thought about indexical vocabulary took for granted some version of the doctrine
that a tokening n of an expression of the type ‘now’ was synonymous with, definable
or semantically analyzable as, ‘the time of utterance of n’, and similarly for ‘here’ and
‘the place of utterance of h,’ and so on. During the 1970s philosophers such as John
Perry, David Lewis, and G. E. M. Anscombe, by focusing on the use of indexicals in
modal and epistemic contexts, showed decisively that this cannot be right: what is
expressed by indexical vocabulary cannot be expressed equivalently by nonindexi-
cal vocabulary. This fact seems so obvious to us now that we might be led to won-
der what philosophers such as Russell, Carnap, and Reichenbach could have been
thinking for all those years. I want to suggest that the genuine phenomenon in the
vicinity is a pragmatically mediated semantic relation between these vocabularies.
Specifically, in spite of the semantic irreducibility of indexical to nonindexical
vocabulary, it is possible to say, entirely in nonindexical terms, what one must do in
order to be deploying indexical vocabulary correctly: to be saying essentially and
irreducibly indexical things. For we can formulate practical rules such as:

If, at time t and place <x,y,z>, speaker s wants to assert that some prop-
erty P holds of <x,y,z,t,s>, it is correct to say “P holds of me, here and
now.”
And
If a speaker s at time t and place <x,y,z> asserts “P holds of me, here and
now,” the speaker is committed to the property P holding of <x,y,z,t,s>.

Nonindexical vocabulary can serve as an adequate pragmatic metavocabulary for
indexical vocabulary. The fact that one nonetheless cannot say in nonindexical terms
everything that one can say with indexical vocabulary just shows that these vocabu-
laries have different expressive powers, so that the pragmatically mediated semantic
relation between them is a case of strict pragmatic expressive bootstrapping.
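
Merely to make the shape of such rules vivid, they can even be written out as toy procedures; in the following Python sketch (the data structure and names are incidental choices of the illustration) the rules themselves are stated entirely in nonindexical terms, while what they govern is the correct use of ‘me’, ‘here’, and ‘now’.

    from collections import namedtuple

    # A nonindexical specification of a context of utterance: who is speaking, where, and when.
    Context = namedtuple("Context", ["speaker", "x", "y", "z", "t"])

    def correct_assertion(context, property_name):
        """Rule 1: what it is correct for the speaker of this context to say in order
        to assert that the property holds of <x,y,z,t,s>."""
        return f"{property_name} holds of me, here and now."

    def commitment(context, utterance):
        """Rule 2: the nonindexically specified commitment undertaken by one who says
        'P holds of me, here and now' in this context."""
        property_name = utterance.split(" holds of me, here and now.")[0]
        c = context
        return f"{property_name} holds of <{c.x},{c.y},{c.z},{c.t},{c.speaker}>"

    ctx = Context(speaker="s", x=0, y=0, z=0, t="t0")
    print(correct_assertion(ctx, "P"))                        # P holds of me, here and now.
    print(commitment(ctx, "P holds of me, here and now."))    # P holds of <0,0,0,t0,s>
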
Here is another example. Besides pragmatically mediated semantic relations
between vocabularies, there is another sort of pragmatic analysis, which relates one
constellation of practices-or-abilities to another. It corresponds to another basic
meaning-use relation: the kind of PP-sufficiency that holds when having acquired
one set of abilities means one can already do everything one needs to do, in prin-
ciple, to be able to do something else. One concrete way of filling in a definite sense
of “in principle” is by algorithmic elaboration, where exercising the target ability just
is exercising the right basic abilities in the right order and under the right circum-
stances. (Of course, this is just one species of the genus of practical projection that
Wittgenstein brings to our attention.) As an example, the ability to do long division
just consists in exercising the abilities to do multiplication and subtraction accord-
ing to a particular conditional branched-schedule algorithm. The practical abilities
that implement such an algorithmic PP-sufficiency relation are just those exercised
by finite-state automata (in general, Turing machines). Indeed, automata should be
thought of as consisting in a definite set of meta-abilities: abilities to elaborate a set
of primitive abilities into a set of more complex ones, which can accordingly be
pragmatically analyzed in terms of, or decomposed into, the former.21
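
To illustrate (the particular algorithm is just the schoolbook one, sketched here in Python for positive integers): the target ability, dividing, is exercised by exercising nothing but multiplication, subtraction, and comparison, in the right order and under the right circumstances.

    def long_divide(dividend, divisor):
        """Schoolbook long division (positive integers): at each digit the trial
        quotient digit is found by multiplication and comparison, and the remainder
        is reduced by subtraction."""
        quotient_digits, remainder = [], 0
        for digit_char in str(dividend):
            remainder = remainder * 10 + int(digit_char)   # bring down the next digit
            q = 0
            while (q + 1) * divisor <= remainder:          # trial multiplication
                q += 1
            remainder = remainder - q * divisor            # subtraction
            quotient_digits.append(str(q))
        return int("".join(quotient_digits)), remainder

    print(long_divide(1234, 7))    # (176, 2), since 7 * 176 + 2 = 1234
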
To get a usefully general concept of the PP-sufficiency of a set of basic abilities
for a set of more complex ones, we need to move beyond the purely syntactic
automata I have described so far. One way to do that is to replace their specialized
capacities to read and write symbols—in the minimal sense of classifying tokens as
to types and producing tokens of specified types—by more general recognitional
and productive capacities. These are abilities to respond differentially to various in
general nonsymbolic stimuli (for instance, the visible presence of red things), cor-
responding to reading, and to respond by producing performances of various in
general nonsymbolic kinds (for instance, walking north for a mile), corresponding
to writing. What practically implements the algorithmic elaboration of such a set
of basic differential responsive abilities is a finite state transducing automaton (and
its more sophisticated push-down brethren).

[Diagram: A Finite-State Transducing Automaton, with numbered states (1–6) and arcs labeled by stimulus:response pairs such as S1:R7, S4:R6, S1:R3, S7:_, S3:_, and _:R7.]

This is a diagram of an FSTA that has an initial set of stimuli to which it can
respond differentially, and an initial set of responses it can differentially produce.
And the diagram indicates that in its initial state, if presented with a stimulus of
kind 1, it will produce a response of kind 7 and shift to state 2, and if presented
instead with a stimulus of kind 7 it will produce no response, but will shift to state
3. It is important to note that although the recognitive and performative abilities
that such an automaton algorithmically elaborates are to be considered as ‘primi-
tive’ or ‘basic’ with respect to such elaboration, this does not mean that they are so
in any absolute sense. The stimulus-response formulation by itself does not keep us
from considering as ‘primitive’ capacities the abilities to keep ourselves at a suitable
distance from a conversational partner, distinguish cubist paintings done by Braque
from those done by Picasso, drive from New York to San Francisco, or build a house.
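
Read as an algorithm, the transducing automaton just described might be rendered as follows (a minimal Python sketch; the numeric encoding of stimulus and response kinds is assumed for illustration, and only the two transitions spelled out in the text are entered, the diagram's remaining arcs being further rows of the same table):

```python
# Transition table: (state, stimulus_kind) -> (response_kind or None, next_state).
# Only the transitions described in the text are encoded; undefined pairs
# simply raise KeyError in this sketch.
TRANSITIONS = {
    (1, 1): (7, 2),    # in state 1, stimulus of kind 1: respond with kind 7, go to state 2
    (1, 7): (None, 3), # in state 1, stimulus of kind 7: no response, go to state 3
}

def run_fsta(stimuli, start_state=1):
    """Read a stream of stimulus kinds, emit a stream of response kinds:
    the algorithmic elaboration of basic differential responsive abilities."""
    state, responses = start_state, []
    for s in stimuli:
        response, state = TRANSITIONS[(state, s)]
        if response is not None:
            responses.append(response)
    return responses, state

print(run_fsta([1]))   # ([7], 2)
print(run_fsta([7]))   # ([], 3)
```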
The notion of the algorithmic decomposability of some practices-or-abilities
into others suggests in turn a pragmatic generalization of the classical program of
artificial intelligence functionalism—which, though a latecomer in the twentieth
century, deserves, I think, to count as a third core program of classical semantic
analysis, alongside empiricism and naturalism. AI functionalism traditionally held
itself hostage to a commitment to the purely symbolic character of intelligence in
the sense of sapience. But broadening our concern from automata as purely syntac-
tic engines to the realm of transducing automata, we are now in a position to see
automaton functionalism as properly concerned with the algorithmic decompos-
ability of discursive (that is, vocabulary-deploying) practices-and-abilities. What I
will call the ‘pragmatic’ thesis of artificial intelligence is the claim that the ability to
engage in some autonomous discursive practice (a language game one could play
though one played no other) can be algorithmically decomposed into nondiscursive
abilities—where by “nondiscursive” abilities, I mean abilities each of which can in
principle be exhibited by something that does not engage in any autonomous dis-
cursive practice. (Without that restriction on the primitive abilities out of which
discursive ones are to be algorithmically elaborated, the claim would be trivial, since
the null algorithmic decomposition is also a decomposition.) The capacity to talk-
and-think as I am addressing it is the capacity to deploy an autonomous vocabu-
lary. But unlike classical symbolic AI, the pragmatic thesis of artificial intelligence
does not presume that the practical capacities from which some transducing
automaton can algorithmically elaborate the ability to engage in an autonomous
discursive practice must themselves consist exclusively of symbol-manipulating
abilities, never mind ultimately syntactic ones.22
The algorithmic practical elaboration model of AI gives a relatively precise
shape to the pragmatist program of explaining knowing-that in terms of knowing-
how: specifying in a nonintentional, nonsemantic vocabulary what it is one must
do in order to count as deploying some vocabulary to say something, hence as mak-
ing intentional and semantic vocabulary applicable to the performances one pro-
duces. In particular, it offers a construal of the basic claim of AI-functionalism as a
pragmatic expressive bootstrapping claim about computer languages as pragmatic
metavocabularies for much more expressively powerful autonomous vocabularies,
namely natural languages. The arguments for and against this pragmatic version of
AI-functionalism accordingly look quite different from those arrayed on the oppos-
ing sides of the debate about the prospects of symbolic AI.
Combining the notion of PP-sufficiency that holds between two constellations
of practices-or-abilities when one can be algorithmically elaborated from the other
with the two sorts of basic meaning-use relations out of which I previously con-
structed the notion of expressively bootstrapping pragmatic metavocabularies—
namely, a set of practices-or-abilities being PV-sufficient to deploy a vocabulary and
a vocabulary being VP-sufficient to specify a set of practices-or-abilities—makes it
possible to define further kinds of pragmatically mediated semantic relations. As
my final example, consider the relation between logical vocabulary—paradigmati-
cally, conditionals—and ordinary, nonlogical, empirical descriptive vocabulary. I
take it that every autonomous discursive practice must include performances that
have the pragmatic significance of assertions and inferences (which I would argue
come as an indissoluble package). I actually think this PP-necessary condition on
any practices PV-sufficient for autonomously deploying a vocabulary can usefully
be treated as sufficient as well—that is, as what distinguishes discursive practices as
such. But nothing in what follows turns on that further commitment. To count as
engaging in such practices, practitioners must exercise an ability, however fallible,
to assess the goodness of material inferences: to sort them into those they accept
and those they reject. This is part of what one must do in order to say anything. But
it is easy to say how those recognitional and performative abilities, for these purposes
counted as primitive, can be algorithmically elaborated into the capacity to use con-
ditionals. An algorithm VP-sufficient to specify an automaton that practically imple-
ments such a pragmatic elaboration or PP-sufficiency relation is the following:

• Assert the conditional ‘if p then q’ if one endorses the inference from p to q;
• Endorse the inference from p to q if one asserts the conditional ‘if p then q’.

These rules of usage codify introduction and elimination rules for the conditional.
So the capacity to use conditionals can be algorithmically elaborated from the
capacities to make assertions and assess inferences. This is the composition of a PP-
sufficiency relation with a PV-sufficiency relation, and is expressed in the following
meaning-use diagram:

[Meaning-use diagram: “Elaborating Conditionals.” Vocabularies: Vconditionals, V1, VAlgorithm; practices: Pconditionals, Pinferring/asserting, PADP. Basic relations: 1: PV-suff, 2: PV-nec, 3: PP-suff (PAlgEl), 4: PV-suff, 5: PV-suff; resultant relation Res1: VV 1–4.]

The complex resultant meaning-use relation indicated by the dotted arrow at the
top of the diagram is a further pragmatically mediated semantic relation. The dia-
gram indicates exactly what constellation of sub-claims about basic meaning-use
relations must be justified in order to justify the claim that this relation obtains
between two vocabularies, and hence the diagram graphically presents a distinctive
kind of meaning-use analysis of that semantic relation.
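
To display the algorithmic character of this elaboration, here is a toy sketch (Python; the class and method names are illustrative assumptions rather than anything in the lectures) in which the only primitive abilities exercised are making assertions and sorting material inferences into endorsed and rejected, and in which the use of conditionals is implemented by the two rules stated above:

```python
class Practitioner:
    """Primitive abilities presumed for any autonomous discursive practice:
    asserting, and assessing (endorsing or rejecting) material inferences."""
    def __init__(self, endorsed_inferences):
        self.endorsed = set(endorsed_inferences)   # pairs (p, q) whose inference is accepted
        self.assertions = set()

    def make_assertion(self, sentence):
        self.assertions.add(sentence)

    def endorses(self, p, q):
        return (p, q) in self.endorsed

    # Elaborated ability: deploying conditionals via the two rules of usage.
    def introduce_conditional(self, p, q):
        # Assert the conditional 'if p then q' if one endorses the inference from p to q.
        if self.endorses(p, q):
            self.make_assertion(f"if {p} then {q}")

    def eliminate_conditional(self, p, q):
        # Endorse the inference from p to q if one asserts the conditional 'if p then q'.
        if f"if {p} then {q}" in self.assertions:
            self.endorsed.add((p, q))

s = Practitioner(endorsed_inferences={("it is raining", "the streets are wet")})
s.introduce_conditional("it is raining", "the streets are wet")
print(s.assertions)   # {'if it is raining then the streets are wet'}
```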
In fact, if we think further about this example, by filling in another basic mean-
ing-use relation that obtains in this case, we can define an even more articulated
pragmatically mediated semantic relation between vocabularies. For when condi-
tionals are deployed with the practical circumstances and consequences of applica-
tion specified in the algorithm stated above, they let practitioners say what otherwise
they could only do; that is, they express explicitly, in the form of a claimable, hence
propositional, content, what practitioners are implicitly doing in endorsing some
material inferences and rejecting others. This is a VP-sufficiency relation: condi-
tionals let one specify the practices of taking-or-treating inferences as materially
good or bad. Adding in this explicating relation between conditionals and the prac-
tices-or-abilities they make explicit yields a new pragmatically mediated semantic
relation that conditionals stand in to every autonomously deployable vocabulary.
Its meaning-use diagram is this:

[Meaning-use diagram: “Elaborated-Explicating (LX) Conditionals.” Vocabularies: Vconditionals, V1, VAlgorithm; practices: Pconditionals, Pinferring, PADP. Basic relations: 1: PV-suff, 2: PV-nec, 3: PP-suff (PAlgEl), 4: PV-suff, 5: VP-suff, 6: PV-suff; resultant relation Res1: VV 1–5.]

The practical capacity to deploy conditionals (that is, something PV-sufficient for
their use) both can be elaborated from practices PP-necessary for every ADP, and
explicates those practices (in the sense of being VP-sufficient for them). It is elabo-
rated-explicative relative to every autonomous vocabulary. We say, it is LX for every
AV, hence for every vocabulary (since the use of any vocabulary presupposes, and
in that sense is parasitic on, the capacity to use some autonomous vocabulary).
I believe that this complex resultant pragmatically mediated semantic relation
is important for understanding the distinctive semantic role played by logical
vocabulary generally: not just conditionals, but also negation (which makes explicit
a central feature of our practice of treating claims as materially incompatible), and
even modal vocabulary (which makes explicit a central feature of our practice of
associating ranges of counterfactual robustness with material inferences). In my ini-
tial characterization of the classical semantic project of philosophical analysis, I
pointed to the special status that is accorded to logical vocabulary in that project.
What I called “semantic logicism” is its commitment to the legitimacy of the strat-
egy of using logical vocabulary to articulate the semantic relations between vocab-
ularies that is its goal—paradigmatically in connection with the core projects of
empiricism, naturalism, and functionalism. One interesting way to vindicate that
commitment (that is, at once to explain and to justify it) would be to appeal to the
fact that logical vocabulary is elaborated from and explicating of every autono-
mously deployable vocabulary whatsoever. For that means that the capacity to use
logical vocabulary is both in this very clear and specific sense implicit in the capac-
ity to use any vocabulary, and has the expressive function of making explicit some-
thing already present in the use of any vocabulary.
I won’t say anything more here about how such a vindication might proceed,
contenting myself with the observation that insofar as there is anything to an
account along these lines, supplementing the traditional philosophical analytical
concern with semantic relations between the meanings expressed by different kinds
of vocabulary by worrying also about the pragmatic relations between those mean-
ings and the use of those vocabularies in virtue of which they express those mean-
ings is not so much extending the classical project of analysis as unpacking it, to
reveal explicitly a pragmatic structure that turns out to have been implicit in the
analytic semantic project all along. For the conclusion will be that it is because some
vocabularies are universal pragmatically elaborated and explicitating vocabularies
that semantic analysis of the logicist sort is both possible and legitimate at all. I
don’t claim to have entitled myself to that conclusion here, only to have introduced
some conceptual machinery that might make it possible to do so—and so at least
to have sketched a way in which the insights of the pragmatist tradition can be
assembled and developed so as to be constructively helpful to, rather than destruc-
tively critical of, the classical project of philosophical semantic analysis, and so to
open the way to extending that project in promising new directions.

NOTES

1. In this usage, the logicism about mathematics characteristic of Frege’s Grundgesetze and Russell
and Whitehead’s Principia is semantic logicism about the relations between mathematical and log-
ical vocabularies.

2. This argument occupies roughly the first half of his classic “Empiricism and the Philosophy of
Mind” (reprinted by Harvard University Press, 1997). His critique of the phenomenalist version
of empiricism can be found in “Phenomenalism,” in his collection Science, Perception, and Reality
(Routledge Kegan Paul, 1963).
3. “Pragmatics and Pragmatisms,” in James Conant and Urszula M. Zeglen (eds.), Hilary Putnam:
Pragmatism and Realism (Routledge, 2002), translated as “Pragmatik und Pragmatismus,” 29–58,
in M. Sandbothe (ed.), Die Renaissance des Pragmatismus (Velbrück Wissenschaft, 2000).
4. Ibid.
5. Cf. Quine’s remark (in “Two Dogmas of Empiricism”): “Meaning is what essence becomes when
it is detached from the thing and attached to the word.”
6. I am indebted for this way of thinking of Wittgenstein’s point to Hans Julius Schneider’s penetrat-
ing discussion in his Phantasie und Kalkül (Suhrkamp, 1992).
7. A patient and detailed investigation of the mechanisms of this phenomenon in basic descriptive
and scientific concepts, and an extended argument for its ubiquity can be found in Mark Wilson’s
exciting and original Wandering Significance (Oxford University Press, 2006).
8. I would be happy if those who dance with his texts find affinities here with Hegel’s insistence that
the metaconceptual categories of Verstand must be replaced by those of Vernunft. It is character-
istic of his philosophical ambition that draws the opposite of Wittgenstein’s conclusions from an
appreciation of the dynamics of conceptual development and its sensitivity to arbitrary contin-
gent features of the practitioners, devoting himself to elaborating what he insists is the logic of
such processes and the conceptual contents they shape.
9. For the purposes of the present project, I will maintain a studied neutrality between these options.
The apparatus I am introducing can be noncommittal as to whether we understand content-con-
ferring uses of expressions in terms of social practices or individual abilities.
10. Somewhat more precisely: some theory (a set of sentences) formulable in the vocabulary in question is such that if all those sentences are true of some interlocutor, then that interlocutor thereby counts as exercising the relevant ability, or engaging in the relevant practices.
11. See his “Naturalism without Representationalism,” in Mario de Caro and David Macarthur (eds.),
Naturalism in Question (Harvard University Press, 2004), 71–90. Price calls his view “subject nat-
uralism,” as opposed to the more traditional (and more metaphysical) “object naturalism.”
12. I discuss this argument in greater detail in the final chapter of Tales of the Mighty Dead (Harvard
University Press, 2004.)
13. Computational linguists, who worry about vocabularies in this sense, have developed metalan-
guages for specifying important classes of such vocabularies: the syntactic analogues of semantic
metalanguages in the cases we will eventually address. So, for instance, for the alphabet {a,b}, ‘aⁿbⁿ’
characterizes the vocabulary that comprises all strings of some finite number of ‘a’s followed by
the same number of ‘b’s. ‘a(ba)*b’ characterizes the vocabulary that comprises all strings begin-
ning with an ‘a’, ending with a ‘b’, and having any number of repetitions of the sub-string ‘ba’ in
between.
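
Both specifications can be checked mechanically; the following is a minimal sketch in Python (the membership tests are mine, written only for illustration; note that aⁿbⁿ, unlike a(ba)*b, is not specifiable by a regular expression):

```python
import re

def in_a_ba_star_b(s: str) -> bool:
    # Strings beginning with 'a', ending with 'b', with any number of 'ba' repetitions between.
    return re.fullmatch(r"a(ba)*b", s) is not None

def in_anbn(s: str) -> bool:
    # Some finite number of 'a's followed by the same number of 'b's.
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

print(in_a_ba_star_b("abab"))   # True: 'a' + 'ba' + 'b'
print(in_anbn("aaabbb"))        # True
```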
14. Here we can safely just talk about abilities, without danger of restricting the generality of the
analysis.
15. In the syntactic metalanguage for specifying vocabularies that I mentioned in the note above, this
is the vocabulary (ha/ho)*!
16. As a matter of fact, it can be shown that every vocabulary readable/writeable by a nondetermin-
istic finite-state automaton—such as the laughing Santa automaton—is also readable/writeable
by a deterministic one. (ref.)
17. For practice, or to test one’s grip on the digraph specification of FSAs, consider what vocabulary, over the same alphabet as the automaton that produces the laughing Santa, is recognized/produced by this automaton:

[Diagram: The “I’ll Have What She’s Having” Automaton, a five-state FSA over the same alphabet {h, a, o, !}.]

18. By contrast to FSAs, there need not in general be, for every vocabulary computable by a nondeterministic PDA, some deterministic PDA that reads and writes the same vocabulary.
19. Thomas A. Sudkamp, Languages and Machines, 2nd ed. (Addison Wesley Longman, 1998), ch. 10.
20. In principle. There are subtleties that arise when we look at the details of actual implementations
of particular computer languages, which can remove them from qualifying as strictly context-free.
21. There are various vocabularies that are VP-sufficient for specifying those meta-abilities. Specifying
them in terms of the differentially elicitable capacities to change state and to store and retrieve
symbols is just one of them.
22. For this reason, the frame problem, as it is often formulated, does not immediately arise for the
pragmatic version of AI-functionalism. But as is explored in the third chapter of Between Saying
and Doing, it does get a grip, at a different point.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Varieties of Deflationism

Paul Horwich
New York University

The traditional view of truth—still widely endorsed—involves three assumptions.


(i) Truth is a property; some beliefs and statements exemplify it and some don’t.
(ii) It’s a substantive property, in that we can reasonably expect an account of what
truth is, of its underlying nature. And (iii) this account should provide explanations
of various important things about truth: including why the methods appropriate
for its detection are what they are and why we are well advised to pursue it—that
is, to strive for true belief.
Deflationism is a reaction against this traditional picture. It was provoked, to
begin with, by the inability of philosophers to come up with a good account meet-
ing the above constraints. Amongst other ideas, we tried truth as correspondence
with fact, as coherence, as provability, as utility, and as consensus; but they all
turned out to be defective in one way or another.1 In light of this series of failures
some of us began to question whether we’d had any reason in the first place to
assume that there must be some such account—that truth must be a substantive
property whose nature will rationalize our epistemic methods and explain the
desirability of their goal.
The outcome of these reflections—the deflationary perspective on truth—has
been articulated in various different specific forms: there are various competing
deflationary theories of truth. So we must confront the question of which of them
is best. And my main objective in this review is to settle that question—to examine
the pros and cons of these alternatives, and to recommend a particular one of them.
But before we become immersed in these details I want to do two things: first, to be
more explicit about what the competing proposals have in common—to say more
about what makes a theory of truth deflationary; and second, to indicate something
of the broad philosophical significance of deflationism by demonstrating its vital
bearing on what can and should be said about the nature of meaning.

II

On the first point: deflationary theories are characterized by a cluster of four inter-
locking ideas about the truth predicate:—one at the level of pragmatics, concern-
ing its function; another at the level of semantics, concerning the way in which its
meaning is fixed; a third at the level of metaphysics, concerning what sort of prop-
erty, if any, the truth predicate stands for, and what the fundamental facts are con-
cerning that property; and a fourth at the level of philosophical import, concerning
the alleged theoretical profundity of the notion it expresses. More specifically, the
four-pronged deflationary position is as follows:

First: the word “true” has an idiosyncratic conceptual function, a special kind of
utility. What exactly that function is said to be varies from one deflationary account
to another: the truth predicate is taken to be primarily a device of emphasis (“It’s
true!”), or concession (“True, I was a bit drunk; but I did see a flying saucer”), or
generalization (“Every statement of the form, “p → p”, is true”), or anaphora
(“Deflationism is an elegant theory”—“True”). But, despite this divergence of opin-
ion, there is agreement that we must distinguish “true”’s raison d’être from that of
other terms, and that we must especially beware of assimilating it to empirical
predicates—such as “red”, “tree”, and “magnetic”—whose utility resides in their role
in prediction and causal explanation.

Second: the non-predictive non-explanatory role of the truth-predicate implies that
its meaning is not empirical—i.e. its deployment is not based on a rule that
instructs us to accept
(x)(x is true ↔ x is F)

where “F” articulates some observable content. Nor does it abbreviate a complex
non-empirical expression—so that a definition of the standard explicit type would
still explain its usage. Rather, the central principle governing our overall deploy-
ment of the truth predicate is, very roughly speaking, that each statement articu-
lates the conditions that are necessary and sufficient for its own truth. In other
words, we are required to accept that the statement made by “Dogs bark” is true just
in case dogs bark, and so on. It is some such basic principle of use that provides the
truth predicate with its distinctive meaning and which (as we shall see) equips it to
perform its distinctive function.

Third: insofar as “true” does not have the role and meaning-constituting use of an
empirical predicate, we can appreciate a priori that there will be no reductive analy-
sis of truth to empirical (i.e. naturalistic) properties. Similarly, we can see that the
fundamental facts about truth will not be natural laws relating it to empirical phen-
omena. The most that can be expected, by way of theory, is a systematization—per-
haps a mere list—of the superficial facts to which we are committed by the
meaning of the truth predicate: for example, the fact that if dogs bark then it is true
that they do, and so on.2 This is what is meant by saying that the property of truth
is not ‘substantive’.

And fourth: in light of the foregoing, truth is not, as often assumed, a deep concept
and should not be given a pivotal role in philosophical theorizing. It cannot be the
basis of our conceptions of meaning, or of justification, or of logic. Its actual philo-
sophical significance is somewhat back-handed—lying in the fact that so many
problems are exacerbated by our failure to appreciate its rather mundane function
and the peculiar definition—i.e. principle of use—which enable this function to be
fulfilled.3

III

Let me illustrate the import of the deflationary perspective by sketching its rele-
vance to the theory of meaning:

(i) According to deflationism, the basic rule we follow in our use of “true” is to
apply it to a statement, s, when we take s to have the same content (i.e. meaning) as
something we are already disposed to assert. For example, we agree that “Schnee ist
weiss” expresses a truth, because we know that it means the same as a sentence of
our own that we accept. And we apply it to that ‘home’ sentence (i.e. “Snow is
white”) because it (trivially) has the same content as “Snow is white”, which we
accept. This practice with the word, “true”, insofar as it is explanatorily basic with
respect to our overall use of it, is what engenders our possession of the concept of
TRUTH. But it presupposes that we be able to recognize ‘sameness of meaning’. Thus
the concept, MEANING, is more fundamental than the concept, TRUTH. We must
therefore renounce the orthodox assumption that sentence meanings are to be ana-
lyzed in terms of truth conditions.4

(ii) Insofar as the truth-theoretic notions (TRUTH, BEING TRUE OF, and REFERENCE) are
not naturalistic—insofar as they stand for properties and relations that are neither
reducible to non-semantic (causal) phenomena, nor even nomologically correlated
with them—these notions cannot be amongst the central concepts of an empirical
causal/explanatory theory. In particular, no adequate scientific semantics can be
truth-theoretic. Thus the current methodology of linguistics becomes questionable.5

(iii) If—as in the traditional inflationary conception—BEING TRUE OF were a natural
(potentially reducible) concept, graspable independently of the concept of
MEANING, then, since ‘w means F → w is true of the fs’ (e.g., ‘w means DOG → w is
true of the dogs’), any decent account of w’s meaning would have to explain its
truth-theoretic import. More specifically, we would be entitled to suppose that a
given underlying non-semantic property of word w—say, U(w)—constitutes w’s
meaning F only if the possession of U by w would explain why that word is true of
exactly the fs. However, satisfaction of this requirement would be no trivial matter:
U(w) would have to take the relational form, ‘w bears relation R to the fs’.6 De-
flationism, in contrast, implies no such constraint. It leaves open the possibility of
a much broader range of meaning-constituting properties—including internalistic
inferential roles.7

(iv) The just-mentioned inflationary constraint on meaning-constituting properties
may well prove so severe that nothing can satisfy it. Perhaps (as argued by
Kripke) no relation R can be found that relates “dog” to the dogs, “electron” to the
electrons, “number” to the numbers, etc. In which case one might be forced to con-
cede that there can be no such thing as meaning. So a further virtue of deflation-
ism is that it immunizes us against this skeptical threat.8

IV

Thus it matters a great deal whether the general deflationary perspective is correct
or not. So let’s consider various specific deflationary proposals in order to deter-
mine whether any of them is defensible. I take the most prominent and promising
to be:

1. The redundancy theory, whereby “The proposition that p is true” merely
rearticulates the meaning of “p”.9
2. The minimalist theory, whereby we mean what we do by the truth predi-
cate in virtue of the fact that our overall use of it is explained by our
propensity to accept instances of the material equivalence schema, “The
proposition that p is true ↔ p”.10
3. Tarski’s theory, whereby truth, for the sentences of a given language, is
defined in terms of reference within that language, which is in turn
defined by a mere list of what its words refer to. (Note that Tarski is quite
clear that the meaning of the expression, “true in English”, is implicitly
fixed by a disquotation schema. The purpose of his further theorizing is
to put this idea into the form of a ‘respectable’, explicit definition).11
4. The sentence-variable theory, whereby “x is true” is analyzed as
“(∃p)(x=<p> & p)”.12
5. The prosentential theory, whereby “it is true” is a kind of dummy sen-
tence, or sentence variable, whose relation to other sentences is similar to
the relationship between pronouns and other singular terms.13
6. The disquotation theory, whereby our concept of truth for sentences (at
least for the context-insensitive part of our own language) is captured by
the strong equivalence of ““p” is true” and “p”.14

These accounts meet my four conditions for being deflationary. First, their advo-
cates typically stress some distinctive, mundane function of our truth predicate.
Second, any theoretical work that might be given to it is severely confined by this
view of its function. Third, they each take the fundamental (meaning-constituting)
use of the word to be (very roughly speaking, and each in its own way) the assump-
tion that every statement articulates its own truth condition. And fourth, these the-
ories concur in denying that the property of truth has any naturalistic reductive
analysis, and in denying that its character may be captured by principles relating it
to empirical phenomena.
So much for the intension, extension, and philosophical interest of the con-
cept, ‘deflationary view of truth’. Let me now compare and contrast the accounts
that fall under it, and explain why I think that minimalism is the most attractive of
them. My plan will be to consider the competitors one at a time, briefly stating what
I take their relative advantages and disadvantages to be.

V

According to the redundancy theory, “p” and “It is true that p” and “The proposi-
tion that p is true” say exactly the same thing. No meaning is added by “It is true
that . . . ” or by “The proposition that . . . is true”; so they are redundant. This
implies that the logical form of, for example
The proposition that Fido is a dog is true

is simply
Dog[Fido]

which in turn implies that the English truth predicate (like the English existence
predicate) is not a ‘logical predicate’—i.e. the underlying logical formalization of a
truth attribution will involve no predicate expressing the concept of TRUTH.
But in that case we are unable to explain the validity of the following inference
schema, whose deployment is vital to the term’s utility:

X is true                      Ed’s claim is true
X = <p>                        Ed’s claim is that Fido is a dog
—————————                      ————————————————————
∴ <p> is true                  ∴ That Fido is a dog is true
—————————                      ————————————————————
∴ p                            ∴ Fido is a dog

For in order that the first part hold—as an instance of Leibniz’ Law, “a=b → (Fa ↔
Fb)”—it is necessary that the truth predicate be a logical predicate.

VI

Suppose, in order to rectify this crippling defect, we make a small adjustment to the
redundancy theory—continuing to claim that the concept of truth is captured by
the equivalence of “<p> is true” and “p”, but deploying a weaker notion of equiva-
lence—no longer synonymy, but merely an a priori known and metaphysically nec-
essary material equivalence. We thereby arrive at minimalism. More precisely, the
minimalist contention is that our meaning what we do by “true” is engendered by
our underived acceptance of the schema, “<p> is true ↔ p” (from which we may
then infer its necessity and apriority). Given this revision of the redundancy the-
ory, the word “true” can be treated as a logical predicate; and so the sorts of infer-
ence that are essential (if there is going to be any point in having a truth predicate)
will be evidently valid.15

VII

Now suppose that someone, while fully sympathizing with this line of thought,
nonetheless aspires to provide an explicit definition of truth. Is there some way of
reformulating the content of minimalism so that it will take that desired shape?
This was Tarski’s problematic, and his answer was yes. He recognized that it
will certainly not do simply to say
x is true ≡ [x = <dogs bark> and dogs bark; or
x = <pigs fly> and pigs fly; or
. . . and so on]

since this would be an ill-formed gesturing towards an infinitely long statement. But
he was then forced, for the sake of finiteness, to focus on the truth of the sentences of
specific languages. Because only for such a notion—i.e. “true in language L”—might
it be feasible to derive each of the infinitely many conditions for its application
from a finite base:—namely, of principles concerning the referents of the finitely
many words in L. Note that it was not an option for Tarski to give an account of
how the truth-values of propositions depend on the referents of their components,
because there are infinitely many such components (i.e. possible basic concepts)
whose referents would each require specification. Rather, his only reasonable strat-
egy was to offer a separate account for each language—an account that lists the ref-
erents of each of the words in the language and the principles determining how the
referents (and truth-values) of complex expressions depend on them. Such axioms
could then, with considerable logical ingenuity, be re-jigged into the form of an
explicit definition.
This is an elegant result; but the costs are heavy:

• In the first place, we don’t get an account of the notion that initially
puzzled us:—namely, truth as applied to what we believe, to the things
our utterances express.

• In the second place, even if we decided to aim instead at an account of
sentential truth—i.e. to aim at capturing what we mean by “expresses a
truth of L”—the Tarskian definitions are clearly deficient. For amongst
the features of one’s actual understanding of this expression that will not
be captured by a Tarskian definition are (a) that it is based, composition-
ally, on what is meant by “expresses”, “true”, “in L”, etc., and on the way
these terms are combined; (b) that it does not presuppose a prior under-
standing of every one of L’s words; and (c) that insofar as a notion of ref-
erence is presupposed by it, that notion will not be definable by a mere
list of what each word of L happens to refer to.

• And in the third place, it’s only for a limited range of formal languages
that Tarski-style explanations of truth in terms of word-reference can
unquestionably be supplied. There are notorious difficulties in giving
such a treatment of attributive adjectives, belief attributions, condition-
als, and many other natural language constructions.16

Supposing, for these reasons, that we judge Tarski’s account not to be good enough.
Is there any alternative explicit definition that will articulate his insight that what
must be conveyed is no more or less than the equivalence of “<p> is true” and “p”?

VIII

Well there is the view that I called the sentence-variable analysis—first articulated
by Frank Ramsey and urged these days by Christopher Hill and Wolfgang Künne:
x is true ≡ (∃p)(x=<p> & p)

Evidently, the variable, ‘p’, and quantifier, ‘∃p’, that are deployed on the right-hand
side are non-standard. Unlike the normal existential quantifier, which binds variables
that occur in nominal positions and which formalizes the word “something”, this
one binds variables that occur in sentence positions and does not appear to corre-
spond to any of the expressions of a natural language.
So one cause for concern is whether these notions are coherent. We might well
be troubled by an inability to say what the variables range over, and by their use to
quantify into opaque ‘that’-clauses.—And there is no easy escape from these diffi-
culties in simply supposing that the new variables and quantifiers are substitutional.
For substitutional quantifiers are defined in terms of truth:—“{∃p}( . . . p . . . )”
means “The result of substituting a sentence for ‘p’ in ‘. . . p . . .’ is true”—and there-
fore cannot be used to explain what truth is.
But even if we set aside questions of coherence, there is the further objection
that our actual understanding of the truth predicate cannot be built on notions that
we don’t yet have—and it’s plausible, for a couple of reasons, that sentential quan-
tifiers and variables don’t already exist either in natural languages or in languages
of thought.
In the first place, we can’t find any clear cases of them. For example, it would
be implausible to suppose that when we say
For any way things might be, either they are that way or they are not

what we are actually thinking is


(p)(p v -p)

since this would miss a great deal of the original’s structure. Surely a better account
of what we have in mind is given by the straightforwardly faithful
(x)(y)[(x is an n-tuple of things & y is an n-adic property) → (x has y
or x does not have y)]

And in the second place, if we did have the non-standard variables and quantifiers in
question, then our actual deployment of the truth predicate would be very surpris-
ing. For the generalizations that it enables us to formulate (e.g. “Everything of the
form, ‘pv-p’, is true”) could be articulated more directly in terms of our sentential
variables. Why would there not be a natural language way of saying “(p)(pv-p)”?
This is not to deny that one might now decide to introduce the new apparatus,
and then proceed to formulate a definition of “true” in terms of it. But it can hardly
be supposed that such a definition would constitute our present understanding of the
truth predicate, since we don’t yet have the terms and notions that are needed for it.
Against this point it might be protested that it is not really a requirement on a
definition that it be psychologically accurate—that it fully capture our pre-existing
basis for using the word. Consider, for example, Frege’s innovative definitions of the
numerals in terms of sets, or his definition of the ancestral relation in terms of sec-
ond-order predicate logic.17

However, the moral here is that we must distinguish two conceptions of ‘defi-
nition’, corresponding to two things that we might be trying to do with such a
thing. One kind is descriptive, the other revisionary; one attempts to articulate what
is already meant, the other specifies and recommends a new meaning. In these
terms, my objection to the sentence-variable analysis of truth was that it does not
do the first job—it is not descriptively accurate. But nor does it have any evident
value as a suggestion about what we ought to mean by “true”. In order to justify
such a revision, we would need to identify some defect in our present notion—a
defect that would be remedied by replacing it with the proposed new one; or else
we would have to have some independent rationale for introducing the new forms
of variable and quantifier in terms of which an equally useful, new notion of truth
could easily be defined. But no such case has been made. And, even if it were to be
made, we should not forget that the suggested revision would inevitably bypass our
main original problem, which was to demystify truth as we actually conceive of it.
Suppose then, in light of the failure of either Tarski’s account or the sentence-
variable analysis to provide a plausible definition that is both explicit and deflation-
ary, we face up to the fact that there was never any reason in the first place to expect
the truth predicate’s meaning to be fixed by an explicit definition.18 Where does that
leave us? Well, as we have seen, one possibility is to embrace minimalism, whereby
the truth predicate applies to propositions, and its meaning is constituted, not by
any explicit definition, but by our underived acceptance of the equivalence schema.
However, perhaps something better than minimalism can be found?

IX

For example, there is the prosentential point of view (initially proposed by Grover,
Camp, and Belnap19), whereby the word “true” is a meaningless part of the dummy
sentence, “it-is-true”, which, like a pronoun, gets its content from something in the
context of utterance. Thus the “He” in “He was Austrian” can be used to refer to
Wittgenstein; and the “it” in “If a triangle has equal sides then it has equal angles”
is a bound variable—formally, “(x)(Sx → Ax)”. And, allegedly, the function of “it-
is-true” is analogous:—it can be intended to have the same truth-value as some-
one’s preceding remark that dogs bark. Moreover the “it-is-true” in “Everything is
such that if Mary stated that it-is-true, then it-is-true” may be treated as a bound
variable—formally, “(p)((Mary stated that p) → p)”.
This approach clearly has a close affinity with the just-mentioned sentence-
variable analysis. But instead of supposing that the word “true” is a genuine logical
predicate, explicitly definable in terms of sentential variables and quantifiers, the
prosententialist says that the English word “true” is not a genuine logical predicate,
since it is an undetachable part of a sentence variable. As we have seen, the under-
lying logical form of “Everything Mary said is true” is supposed to be “(p)((Mary
said that p) → p)”, which contains no predicate corresponding to the word “true”.
—Its role was merely to help us to construct the prosentence “it-is-true”, repre-
sented formally by the letter “p”.
Thus the prosententialist denies the claim I made, in objecting to the sentence-
variable analysis, that we don’t actually deploy the needed variables and quantifiers.
Her view is that we are in fact deploying them in our use of the truth predicate. But
of course, this response cannot help to salvage that analysis. For the sentence-vari-
ables that it invokes are composed from the word “true”, and therefore cannot qual-
ify as a suitable basis in terms of which to define it. Thus prosententialism is a quite
distinct point of view.
However, it is no less problematic. After all, at least the sentence-variable analysis
—to its credit—respects the strong indications that our truth predicate possesses
its own definite meaning, and that it has the inferential role of an ordinary predi-
cate. But prosententialism goes against this evidence and to that extent is unappeal-
ingly unnatural.
Focusing for now on the orthodox version of the idea (as developed mainly by
Dorothy Grover20), another difficulty is that most of our uses of the truth predicate
do not occur in the context of the supposedly canonical phrase, “it-is-true”.
Consider, for example:
(a) Goldbach’s Conjecture is true

and
(b) All Mary’s statements are true

In order for orthodox prosententialism to handle such cases they must be regarded
as transformations of underlying sentences in which all the occurrences of the truth
predicate do occur in the context of “it-is-true”—for example:
(a*) Something is such that Goldbach’s Conjecture is the proposition
that it-is-true, and it-is-true

(b*) Everything is such that if Mary stated that it-is-true, then it-is-true

But notice that the quantifiers deployed here—namely, “Something” and “Every-
thing”—could not be construed, as they must be in ordinary English, as binding
the subsequent occurrences of “it”, but would have to be construed, rather, as bind-
ing the entire prosentence, “it-is-true”. Therefore the theory is open to one of the
same objections that were made to the sentence-variable analysis—namely, that
since we don’t actually deploy these non-standard variables and quantifiers, no
account of truth tied to those notions can possibly be an account of our present
concept.

X

A further criticism of the orthodox approach, leveled by Robert Brandom, is that
the quantificational regimentations suggested for cases like “Goldbach’s Conjecture
is true” diverge to an implausible degree from their natural language formulations.
And this concern leads him to propose a somewhat different version of prosenten-
tialism.21
According to his view (which he calls “the anaphoric theory”) “it-is-true” is far
from the only prosentence. Rather, whenever “X” picks out a content (i.e. proposi-
tion), “X-is-true” is a prosentence. Within that context “X” does not have its nor-
mal (referential) function—namely, to identify X as the object of predication—but
serves instead to specify the truth-value of the entire prosentence: namely, that its
truth-value is the same as X’s. Supposedly, the sentence “Goldbach’s Conjecture is
true” does not (as one might naively think) pick out a certain arithmetical propo-
sition and predicate truth of it. Rather, it is a prosentence that is stipulated to have
the same truth-value as Goldbach’s Conjecture—i.e. whatever truth-value is pos-
sessed by the proposition that every even number is the sum of two primes.
However, both of our initial pair of objections to orthodox prosententialism
count equally against Brandom’s new (‘anaphoric’) version:—

1. The alleged underlying form of (for example) “All Mary’s claims were
true” (as “Everything is such that if Mary claimed it-is-true, then it-is-
true”) is implausible, given that “Everything”, as presently understood,
does not govern sentence positions.22

2. “True” looks and acts and smells like a predicate! Therefore, insofar as the
stated rationale for Brandom’s innovation is taken seriously (as it cer-
tainly should be)—insofar as a high premium is placed on minimizing, as
far as reasonably possible, differences between grammatical and logical
forms—then one should be unhappy, from the very outset, at the theory’s
unwillingness to treat “true” as an ordinary meaningful predicate.23

Moreover, it seems to me that Brandom’s modification of prosententialism
ought to be regarded as a significant retreat from the initial idea.
In the first place, the motivating analogy with pronouns has been weakened.—
For there is no such thing as a complex pronoun in which one component specifies
the referent of the whole.
And, in the second place, the substance of his central proposal is that
<X is true> is stipulated to be equivalent to X

But this is tantamount to the minimalist claim:—namely, that we accept, underived,
instances of the schema

<p> is true ↔ p

Thus Brandom’s attempt to improve prosententialism pushes it in the direction of
minimalism.

XI

Finally, we must take a hard look at the disquotational approach, suggested by
Quine and championed these days by Hartry Field. The advantage of this approach,
according to its proponents, is that it has no need for certain ‘creatures of dark-
ness’—namely, propositions. For the truth predicate is applied, they say, to sentence-
tokens and its core meaning is given (for sentences of one’s home language, and
uttered in one’s local context) by the equivalence of ““p” is true” and “p”. But precisely
this alleged advantage may well be regarded as a disadvantage. For the ordinary truth
predicate is primarily applied to such things as ‘what John believes’, ‘what we are
supposing for the sake of argument’, ‘what Mary was attempting to express’, and so
on:—that is, not to sentences, but to propositions. Therefore, insofar as we want to
demystify our concept of truth—to clarify the meaning of our truth predicate—the
disquotational approach is simply a non-starter.24
Not only that; but once we take into account how the approach will have to be
developed if it is to be adequate even as a treatment of sentential truth, then its ini-
tial promise of dispensing with ‘creatures of darkness’ pretty much disappears. For
the pure disquotation schema holds only insofar as the two “p”’s are replaced by
sentences with the same meaning. Moreover, in order to account for applications of
the truth predicate to utterances made in contexts that differ from one’s own, or
that are couched in foreign languages, then disquotationalism needs a ‘projection’
principle, along the lines of
u has the same meaning as my “p” →
(u is true ↔ my “p” is true)

which, when combined with the pure disquotation schema, yields

u has the same meaning as my “p” →
(u is true ↔ p)

So it turns out that disquotationalism cannot, in the end, avoid entanglement with
the notion of meaning. Granted, we haven’t yet shown that it needs to go all the way
to the positing of propositions. But given the possibility of introducing these things
by means of the schema
u has the same meaning as my “p” →
(u expresses the proposition that p)

that further step is small and innocuous. For assuming that our disquotationalist is
already prepared to countenance some abstract objects (e.g. numbers, or sets, or
word-types), then the abstractness of propositions cannot be what bothers him. The
only reasonable worry could be over the prospect of supplying coherent identity
conditions for such entities. But, as we have just seen, his disquotational theory can
be adequate only insofar as sense can be made of the locution, “u has the same
meaning as “p””—which implies that there is a coherent statement of such condi-
tions.25 Thus the alleged benefit—the conceptual economy—that was supposed to
compensate for the abandonment of our current concept of truth turns out to be
non-existent.

XII

Let me summarize what I hope to have accomplished in this review.


First, I’ve tried to articulate the general deflationary perspective on truth—to
say what features qualify an account of truth as ‘deflationary’.
Second, I’ve tried to illustrate the broad philosophical significance of deflation-
ism by indicating its impact on our theorization of meaning. It creates space for
non-truth-conditional, non-relational accounts (such as Brandom’s normative
inferentialism, and my own propensities-of-use theory), and thereby protects
meaning from the threat of Kripke-style skepticism.
And third, I’ve tried to show that minimalism is the best of the competing spe-
cific theories of truth that fall under the deflationary umbrella. I recognize that the
situation here somewhat resembles the competition between subtly different reli-
gious sects or between political factions that strike outsiders as barely worth distin-
guishing. But, as someone with a dog in this race, I can’t deny that I think it matters
which particular deflationary account of truth is the right one. Still, I admit that the
main virtue of identifying a defensible specific theory is that we thereby show that
the general deflationary perspective is defensible. For that is what has the impor-
tant philosophical ramifications.26

NOTES

1. For discussion of these defects see R. L. Kirkham, Theories of Truth (Cambridge, Mass.: MIT Press,
1992); P. Engel, Truth (Chesham, Bucks: Acumen Publishing Limited, 2002); and W. Künne,
Conceptions of Truth (Oxford: Oxford University Press, 2003).
2. For arguments in support of this claim, see my Truth, 2nd edition (Oxford: Oxford University
Press, 1998), 50–51.
3. Recent philosophical literature is replete with conflicting proposals about the right way to distin-
guish between ‘deflationary’ and ‘non-deflationary’ theories. Now one might suspect that, since
the term at issue here is a piece of jargon whose meaning is to be stipulated, there would be no
possibility of any rational way of deciding between these alternative proposals. But that is not so.
The merits of each candidate may be determined by reference to two considerations: first, whether
the suggested basis of classification is deep and interesting; and second, whether “deflationism”
would be an apt label for that basis. On the first point, I believe it is indicative of the theoretical
depth of the proposal just advanced (a) that its four criteria correlate so well with one another,
and (b) that the issue of whether or not they are satisfied has considerable philosophical import
(e.g. for theories of meaning). And on the second point, the word “deflationism” nicely captures
the idea that truth is less profound and potent than has traditionally been assumed, and (as a con-
sequence) that it does not require the sort of theoretical characterization that has traditionally
been sought. I am not claiming that these considerations amount to a demonstration that my def-
inition of “deflationism” is best. Indeed, given the vagueness of the terms (such as “interesting”
and “appropriate”) in which this matter is to be judged, there are likely to be other reasonable defi-
nitions. Note, however, that even if some other characterization of ‘deflationism’ turns out to be
preferable to mine, that could not bear on the plausibility and significance of the specific (‘mini-
malist’) account of truth that I advocate below.
4. For further discussion, see “A Defense of Minimalism” (response to Objection 9)—chapter 3 of
Truth-Meaning-Reality (Oxford: Oxford University Press, 2010).
The conflict between deflationism and truth conditional semantics is emphasized by
Michael Dummett. See, for example, his “Truth,” Proceedings of the Aristotelian Society, 59 (1959):
141–62.
5. See my “Semantics: what’s truth got to do with it?”—Chapter 8 of Truth-Meaning-Reality, op. cit.
6. For example: ‘We are, in ideal conditions, disposed to apply w to a thing just in case that thing is
an f ’. For further discussion see “Kripke’s Paradox of Meaning”—chapter 6 of Truth-Meaning-
Reality, op. cit.
7. See “Regularities, Rules, Meanings, Truth Conditions, and Epistemic Norms”—chapter 7 of Truth-
Meaning-Reality, op. cit.; and see chapter 3 of my Reflections on Meaning (Oxford: Oxford
University Press, 2005).
8. See Saul Kripke’s Wittgenstein on Rules and Private Language (London: Blackwell, 1982), esp.
22–23. And see “Kripke’s Paradox of Meaning,” op. cit.
9. P. Strawson, “Truth,” Proceedings of the Aristotelian Society, Supp. Vol. 24 (1950): 129–56.
10. See my Truth, op. cit.
11. A. Tarski, “The Concept of Truth in Formalized Languages,” Logic, Semantics, Metamathematics:
Papers from 1923 to 1938 (Oxford: Oxford University Press, 1958), 152–278; A. Tarski, “The
Semantic Conception of Truth,” Philosophy and Phenomenological Research 4 (1944): 341–76.
12. F. Ramsey, “Facts and Propositions,” Proceedings of the Aristotelian Society, Supp. Vol. VII (1927):
153–70; F. Ramsey, “The Nature of Truth,” Episteme 16 (1991): 6–16 [On Truth: Original
Manuscript Materials (1927–1929) from the Ramsey Collection at the University of Pittsburgh, ed.
N. Rescher and U. Majer]; A. N. Prior, Objects of Thought (Oxford: Clarendon Press, 1971), esp.
24–38; C. S. Hill, Thought and World (Cambridge: Cambridge University Press, 2002); W. Künne,
Conceptions of Truth (Oxford: Oxford University Press, 2003).
Note that “<p>” is an abbreviation of “the proposition that p”.
13. D. Grover, J. Camp, and N. Belnap, “A Prosentential Theory of Truth,” Philosophical Studies 27
(1975): 73–125. R. Brandom, Making It Explicit (Cambridge, Mass.: Harvard University Press,
1994), esp. ch. 5.
14. W. V. Quine, Pursuit of Truth (Cambridge, Mass.: Harvard University Press, 1990); S. Leeds, “Theories of Reference and Truth,” Erkenntnis 13 (1978): 111–29; H. Field, “Deflationist Views of Meaning and Content,” Mind 103 (1994): 249–85.
15. Another way of modifying the Redundancy Theory that will address the above objection is to for-
mulate it as the schema, ‘<X is true> = X’ (where “X” can be replaced with any proposition-des-
ignating term). For, in that case, one could reason that <Ed’s claim is true> = Ed’s claim = <Fido
is a dog>. And then, given the first premise (‘Ed’s claim is true’), and given the rule ‘(<p>=<q>
& p) ∴ q’, one could infer ‘Fido is a dog’.
But a person might believe that the proposition designated by “X” is true (e.g. on the basis
of testimony) without knowing which proposition that is—i.e. without being able to articulate it.
So the modified Redundancy Theory can be plausible only relative to a coarse-grained, non-
Fregean conception of proposition. With respect to a Fregean proposition, XF, the import of the
modified Redundancy Theory is merely that XF and <XF is true> are strongly equivalent (but not
identical). So again we arrive at the minimalist theory.

16. For further discussion, see “A Minimalist Critique of Tarski”—chapter 5 of Truth-Meaning-
Reality, op. cit. That discussion considers, in addition, the semantic (‘liar’) paradoxes, and shows
how solution-strategies, based on the notion of grounding, which have usually been associated
with Tarski-style compositional truth-theories, can also be taken over by non-compositional the-
ories such as minimalism.
17. Thanks to John MacFarlane for pressing me on this point.
18. It is argued by some philosophers (following A. Gupta, “A Critique of Deflationism,” Philosophical
Topics 21 [1993]: 57–81) that an explicit definition is needed in order to explain our acceptance
of generalizations about truth (such as “Every instance of ‘If p, then p’ is true”). I attempt to rebut
this argument on pp. 137–38 of Truth (2nd edition), and in “A Minimalist Critique of Tarski,” op.
cit.
19. See their paper, “A Prosentential Theory of Truth,” op. cit.
20. See the papers collected in her book, A Prosentential Theory of Truth (Princeton: Princeton
University Press, 1992).
21. See chapter 5 of his Making It Explicit (Cambridge, Mass.: Harvard University Press, 1994); his
“From Truth to Semantics: A Path Through Making It Explicit,” Philosophical Issues 8 Truth
(1997): 141–54; and his “Explanatory vs. Expressive Deflationism about Truth,” in Richard
Schantz (ed.), What Is Truth? (Berlin: de Gruyter, 2002), 103–19.
22. Brandom acknowledges that minimalistic/disquotational theories can handle quantified uses of
“true” (such as “Everything Mary said is true”), but thinks that “. . . these uses are incorporated in
a more natural way in the prosentential account, by means of the analogy between lazy and quan-
tificational uses of pronouns” (“From Truth to Semantics,” fn. 2).
But I can’t agree. The fact that pronouns have two non-unifiable uses makes the best theory
of them regrettably disjointed. And so it is to be hoped, for the sake of theoretical economy, that
such terms are few and far between. By contrast the minimalist account of truth extends seam-
lessly to quantified cases in virtue of the general inferential roles of “some” and “all”. Only one
principle is needed: <p> is true ↔ p.
23. Another pro-prosententialist consideration that Brandom cites (—see “From Truth to Semantics”)
is that he has a simple response to Paul Boghossian’s argument (in “The Status of Content,” The
Philosophical Review [April 1990]) that deflationism is incoherent (—since some stronger notion
of truth is needed in order to mark the distinction between robust properties and merely defla-
tionary ones). For the prosententialist thesis is that there is no property of truth—a thesis whose for-
mulation evidently does not call on the above distinction between kinds of property.
But minimalism has an equally simple response. For it says merely that truth is fully cap-
tured by the equivalence schema. Again, there’s no call for any further notion of truth. Granted,
the minimalist may go on to observe that truth is not a ‘substantive’ property, i.e. that it has no
underlying nature (—NB the third prong of my initial characterization of deflationism). But no
inflationary notions are needed to make this point.
24. Unlike other disquotationalists, neither Quine nor Field would be greatly perturbed by the pres-
ent point. For they advance the doctrine in a revisionary spirit, aiming to improve upon our ordi-
nary concept of truth and not to capture it.
25. This is not to deny that it will sometimes (perhaps often) be indeterminate whether or not a given
foreign utterance, u, has the same meaning as a given local sentence, “p”. But that would not count
at all against the idea that some proposition is expressed by u. The implication would merely be
that there is an indeterminacy in which one it is—an indeterminacy in whether it is the proposi-
tion expressed by our “p”, or the one expressed by our “q”, etc.
26. This essay is based on material drawn from my entry, “Truth,” in The Oxford Handbook of Con-
temporary Philosophy, ed. Frank Jackson and Michael Smith (Oxford: Oxford University Press,
2005), 454–68. However, the critique of prosententialism to be found there has been substantially
revised and expanded for the present version. And the discussion here of deflationism’s bearing
on theories of meaning is entirely new. I am indebted to Frank Jackson, John MacFarlane, Ed
Minar, and Stephen Neale for their helpful suggestions about how to improve on a draft of the
earlier piece. And I would also like to thank those who attended and discussed my presentation of
this material at the 2007 Brandom Meetings in Prague.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Comment on Lecture One

John McDowell
University of Pittsburgh

In this first lecture, Robert Brandom gets his project under way partly by putting it
in a context in the history of analytic philosophy. His avowed aim is to breathe new
life into a kind of philosophical activity that took shape, in analytic philosophy, as
the classical project of analysis.
In its classical form, as Brandom sees things, analytic philosophy aimed to cast
light on semantic relations between vocabularies. The exemplary activity of ana-
lytic philosophers, in his account, was directed toward showing how we can make
sense of vocabularies thought to be in some way problematic in terms of vocabu-
laries thought not to be problematic in that way.
In putting things like that, Brandom is aiming at a formulation that fits a num-
ber of different ways of doing philosophy. The idea of making sense of one vocab-
ulary in terms of another is meant to cover a great range of semantic relations (or
supposed semantic relations) between vocabularies, all the way from synonymy to
just having the same subject matter. The one constant, in the philosophical pro-
grams Brandom means his description to cover, is that they give logic a privileged
position in specifying the semantic relations between vocabularies that they appeal
to. This is what Brandom calls “semantic logicism.” In the kind of analytic philo-
sophical activity he has in mind, we are supposed to reassure ourselves about some
vocabulary we were finding problematic by engaging in “logical elaboration” of
some vocabulary we were already comfortable with.

In Brandom’s historical narrative, now, this classical analytic tradition comes
under challenge from a methodological pragmatism, which aims to displace the
notion of meaning from the center of philosophical attention in favor of the notion
of use, the notion of what people do with language.
At its most extreme, the thrust of the pragmatist challenge as Brandom reads
it is that the whole project of theorizing about semantic relations between vocabu-
laries should be scrapped. But he aims to disarm the challenge. A fundamental
pragmatist insight is that expressions have their significance only within a practice
of using them in certain ways. And Brandom proposes to co-opt that insight in the
service of a transformed version of the project of analysis.
If we have a systematic interest in the semantics of a vocabulary, the pragma-
tist insight requires us to pursue that interest in the context of the question what
practice one must engage in to count as using the vocabulary with the significance
it has. Answering that question in a particular case would be affirming a practice-
vocabulary relation.
Brandom calls this kind of relation “PV-sufficiency.” This label might raise
some eyebrows, given that he speaks of “what one must do in order to count as say-
ing what the vocabulary lets practitioners express,” which seems to fit a practice that
is necessary, rather than sufficient, for a vocabulary to have its semantic character.
But in this context a condition identified as necessary is a sufficient condition. If
one uses a vocabulary as one must for it to have a certain significance, then the
vocabulary has that significance. And Brandom needs sufficient conditions for what
he goes on to do with the idea.
Besides PV-sufficiency, which relates a practice to a vocabulary, we can also
consider what vocabulary suffices for specifying a practice, perhaps one that is PV-
sufficient for another vocabulary. Answering this second question in a particular
case gives a vocabulary-practice relation, a case of VP-sufficiency.
And now we can combine relations of these two kinds, PV-sufficiency and VP-
sufficiency, into a vocabulary-vocabulary relation: the relation that holds between
two vocabularies when the first enables one to specify a practice that suffices for the
second to have the significance it has. When this resultant vocabulary-vocabulary
relation holds, the first vocabulary is, in Brandom’s terms, a pragmatic metavocab-
ulary for the second.
This relation between vocabularies, a VV-sufficiency relation, is the first and
simplest example of the kind of relation Brandom wants to draw attention to: prag-
matically mediated semantic relations between vocabularies. It, and its kin, are
resources for his new version of the project of analysis. We can still say the project
of analysis aims to display semantic relations between vocabularies. But in doing
that, we can now exploit a new kind of semantic relation. The very idea of seman-
tic relations of this new kind incorporates the thought that drives the pragmatist
challenge, the thought that vocabularies have their semantic character only as used.
Surely, Brandom suggests, that disarms the challenge. These semantic relations can
figure in a version of the project of analysis that does not conflict with the funda-
mental pragmatist insight.

II

I yield to no one in my admiration for the creativity and resourcefulness with which
Brandom elaborates this vision of a pragmatically inflected descendant of the clas-
sical project of analysis. But I want to express skepticism about whether breathing
new life into the classical project should seem a good idea. (I am not sure to what
extent it is necessary for what he goes on to do in these lectures.) By framing his
proposal in what are in effect such terms, Brandom risks making his new creature
look like a sort of Frankenstein monster, enabled to stumble about in a semblance
of life by dint of the grafting of new organs, donated by pragmatism, into some-
thing that would otherwise have been a corpse, and should have been allowed to
stay that way.
Brandom’s exemplary instances of the classical project of analysis are what he
identifies as two core programs in it, empiricism and naturalism. These programs
reflect two different selections, or—since each comes in multiple versions—two dif-
ferent styles of selection, of supposedly unproblematic vocabulary, in the interest
of a thought with this shape: philosophy needs to vindicate the semantic innocence
of all other vocabulary by displaying it as standing in suitable semantic relations to
some selected base vocabulary, or, failing that, to consign all documents containing
such non-base vocabulary to the flames.
Versions of empiricism, in the relevant sense, work with base vocabularies that
are supposed to be unproblematic by virtue of their capacity to capture the imme-
diate deliverances of experience, on some conception of what it might be to do that.
Versions of naturalism work with base vocabularies that are supposed to be
unproblematic by virtue of their capacity to figure in giving expression to thoughts
that are candidates for being validated by the natural sciences.
Now empiricism is dead, at least in the guise of a program for reassuring us
about vocabularies we were finding problematic by displaying their semantic rela-
tions to a vocabulary that is supposedly unproblematic because it captures the
immediate deliverances of experience. Sellars has put paid to any empiricism of that
kind (not necessarily single-handedly). Such a program could seem to be a starter
only if one thought the base vocabulary could be viable on its own, not needing to
be part of a wider vocabulary that includes the vocabulary one is supposed to
explain by way of logical elaboration from the base vocabulary. If we can make
sense of the base vocabulary only by seeing it as part of a vocabulary that already
includes the target vocabulary, there is no go in the idea that the intelligibility of the
supposed base vocabulary fits it to cast light, by logical elaboration, on the target
vocabulary. And that is just how things are, according to Sellars, with any candidate
for the role of base vocabulary for the empiricist program. Sellars argues that there
could not be a linguistic practice whose only moves are reports of immediate obser-
vation. The capacity to make reports of observation, Sellars argues, presupposes a
capacity to give expression to knowledge that cannot be seen as an immediate deliv-
erance of experience. If that is right, it must be wrong to think we can be entitled
to take in our stride a base vocabulary for the empiricist program while still need-
ing to be reassured about other vocabularies. The empiricist core program responds
to a supposed philosophical predicament in which we understand an empiricist
base vocabulary but do not yet understand any other vocabulary. But there is no
way to make sense of this supposed predicament.
Brandom cites Sellars as one of his heroes, and of course he acknowledges the
Sellarsian thesis that there could not be a practice restricted to reports of observation.
In passing, let me say that I think he misrepresents the argument Sellars gives
for the thesis in “Empiricism and the Philosophy of Mind.” As Brandom reads him,
Sellars argues that the forms of words used in observation reports count as having
conceptual content only insofar as, besides being usable in noninferential claims,
which is what reports of observation are, they can also figure as premises for infer-
ences, so that besides the noninferential claims made in observation reports there
must also be inferential claims. But that is not how Sellars argues. Sellars argues that
the knowledge expressed in an observation report cannot be conceived as acquired
all by itself, atomistically, by undergoing the observation that the report expresses.
Sellars urges that acquiring knowledge by observation presupposes other knowl-
edge, including knowledge of a sort that could not be expressed in observation
reports.
In his account of what happens in “Empiricism and the Philosophy of Mind,”
Brandom reads into Sellars his own overestimation of the power of an inferential-
ist conception of meaning to do philosophical work, and the result is that he obscures
an epistemological dimension of Sellars’s thought. But for my present purposes I
can leave this complaint on one side. Brandom is right that Sellars undermines the
idea of a stand-alone practice of making observation reports. The question how
Sellars does that is not important here.
Brandom mentions this strand in Sellars in a context in which he is consider-
ing locally corrosive pragmatist challenges to the project of analysis. And it is true
that Sellars’s thesis demolishes only the empiricist core program. Sellars’s thesis
does not address the naturalist core program, or any other analytic activity. In that
sense Sellars’s challenge is indeed local. But it is still true that the empiricist core
program is, along with the naturalist core program, central to the way Brandom
introduces the very idea of the project of analysis. And by Brandom’s own lights, a
project partly defined in terms of the empiricist core program does not look like a
promising candidate to have new life breathed into it.
What about the naturalist core program? Versions of the naturalist program
are actively pursued in contemporary philosophy. But that much is true of the
empiricist program. However, the sheer fact that there are philosophers who engage
in a certain activity does not by itself establish the credentials of the activity. And
of course Brandom would not suggest it does. If we are invited to join in pursuing
an analytic program, we need to look critically at the supposed grounds for the
privileging of a vocabulary that defines the program, the supposed grounds for
regarding the selected vocabulary as unproblematic in some way in which other
vocabularies are supposedly problematic. If the grounds do not pass muster, we
should decline the invitation. And Brandom does not think we should accept the
privileging that characterizes the naturalist program. In this lecture, he mentions
Huw Price’s proposal for a naturalistic treatment of normative vocabulary, but only
as an illustration for the idea of bootstrapping, and he carefully refrains from
endorsing it himself.
So by Brandom’s own lights neither of the two core programs in terms of
which he introduces the idea of the project of analysis is a promising candidate to
be something that might, in a new shape, figure in a reinvigorated form of the proj-
ect, exploiting not just the old inventory of semantic relations between vocabular-
ies but also Brandom’s new kind, pragmatically mediated semantic relations.
It may seem beside the point to harp on this. After all, the two core programs
Brandom exploits in his introduction of the project of analysis are only examples,
and it is reasonable to say they belong specifically to the classical version of the proj-
ect. It would of course be a gross misunderstanding to think Brandom is urging us
to engage in pragmatically inflected descendants of those core programs.
But the project of analysis he is recommending is like the core programs of the
classical project in this respect: it privileges one vocabulary over others. When one
opposes those core programs, one is opposing the relevant claims of privileged status
for one vocabulary as against others. And there is room for a general suspicion of
all such claims. A thought that is central to pragmatism in the guise in which
Richard Rorty has embraced it can be put like this: given that there are several
vocabularies, all with some currency, it is a mistake for philosophers to think they
should be in the business of discriminating among them, selecting one to be the
basis on which all the rest are to be made intelligible. If that is a mistake in general,
why should we suppose it is any less a mistake if the selected vocabulary is language
for describing linguistic practices than if the selected vocabulary is language for
stating the deliverances of experience or language for giving expression to bits of a
naturalistic worldview?
Besides the two core programs, Brandom also mentions, as high points in the
classical project of analysis, the attempt to reduce arithmetic to logic, and the the-
ory of descriptions. Frege had to acknowledge that his logicism about arithmetic
failed. But his project yielded achievements that stand firm in spite of that failure.
He devised new ways, beyond those available to the logicians whose work he super-
seded, to use logical vocabulary in order to make explicit what certain claims com-
mit one to, so that inferences to those consequential commitments could be
codified in a form that required no leaps of intuition. And that is quite analogous
to what the theory of descriptions aims to do.
In these two cases it would be a stretch to talk of logical elaboration from a
privileged vocabulary. There is indeed logical elaboration, from a vocabulary usable
to express significances that, for these purposes, need not be conceived as internally
structured, to a vocabulary usable to express significances that we can see as built
out of those simple contents. But that does not amount to privileging the simple
vocabulary. There is, if you like, privileging of the logical vocabulary that combines
with the simple vocabulary to make fully explicit the significance of the vocabulary
analyzed in terms of the simple vocabulary. But this privileging of logical vocabu-
lary amounts to no more than accepting that logical vocabulary is uniquely well
suited for making logical structure explicit. How could that be open to dispute?
This is not a case of the kind of essentially contestable privileging of one vocabu-
lary over others that figures in the core programs of empiricism and naturalism,
and still figures in Brandom’s new version of the project of analysis.

III

I asked why we should suppose that privileging a vocabulary for saying what speakers
do with words is any better than the kind of privileging that is characteristic of the
empiricistic program or the naturalistic program. I suggested that Rortyan prag-
matism recommends a general suspicion of all such claims of privilege. Now I had
better acknowledge that there may be a ready response to that suggestion. Those
Rortyan suspicions attach to singling out vocabularies on such grounds as that they
capture how things really are, or that they capture foundations for our knowledge
of how things really are. And that is not the kind of privilege that is supposed to
attach, in Brandom’s analytic pragmatism, to vocabularies for specifying linguistic
practices. Brandom’s favored vocabularies do not owe their claim on us to their
being supposedly closer to the language in which the world describes itself or the
language in which the world immediately makes itself known to us. So perhaps
Brandom’s privileging is not vulnerable to Rortyan strictures.
But what that indicates is at most that the privileging Brandom envisages need
not be suspect in the way I was suggesting. It does not positively recommend his
privileging. What does?
In this lecture, he wants to generate a positive presumption in favor of privi-
leging vocabularies for describing the use of language, on the ground that it reflects
an insight of what he calls “methodological pragmatism”: in particular, an insight
that can be credited to the later Wittgenstein. He acknowledges that his way of read-
ing Wittgenstein is risky, in that he finds support in Wittgenstein for a kind of
undertaking Wittgenstein does not engage in himself or encourage in others. I have
to say that I think Brandom’s treatment of Wittgenstein’s later philosophy largely
misses its point.
Brandom finds in Wittgenstein a general picture of linguistic practice accord-
ing to which language is polymorphous (a “motley”), and indefinitely open to new
ways of using words. So far, I think, so good.
But Brandom represents the anti-systematic slant of Wittgenstein’s philosophy
as a conclusion Wittgenstein draws, idiosyncratically, from that picture of language.
(He says: “Those who are moved by the pragmatist picture generally accept the par-
ticularist, quietist, anti-theoretical conclusions Wittgenstein seems to have drawn
from it.”) Thus he purports to respect the fundamentals of Wittgenstein’s approach
to language while refusing to take the supposed inferential step from the basic pic-
ture to the thought that we ought not to be aiming at anything systematic. He must
be inviting us to regard that thought as an inessential excrescence in Wittgenstein’s
thinking. On these lines he aims to represent his own systematic approach to lan-
guage as accommodating, indeed inspired by, the main insight of Wittgenstein’s
approach.
But this gets things backwards. Wittgenstein’s so-called quietism is not a con-
clusion disputably inferred from a general pragmatist picture that encapsulates the
substance of Wittgenstein’s philosophy of language. On the contrary, the “quietism”
is the whole point.
I doubt that Wittgenstein would find much interest in the style of philosophy
exemplified by the core programs of the classical analytic project. His characteristic
concern is not a frame of mind in which the meanings expressed in some singled-
out region of language are taken to be unproblematic while the meanings expressed
in other regions are taken to be problematic, and thus thought to need illumina-
tion in terms of the supposedly unmysterious base. His characteristic concern,
rather, is a frame of mind in which meaningfulness überhaupt is experienced as a
mystery, whether in the guise of a word being a word for a thing, of a signpost
pointing in a certain direction, or of a specification of a number series determin-
ing what it is correct to write down when one reaches a certain point in extending
the series. The target frame of mind can express itself in a superstitious—specifically,
fetishistic—conception of the bearers of meaning, according to which minds, con-
ceived as seats of remarkable powers, can somehow, perhaps magically, invest mere
inanimate things with properties that seem to make sense only as attributed to ani-
mate beings, so that an arrow’s pointing, say, takes on the appearance of “a hocus-
pocus which can be performed only by the soul” (Philosophical Investigations §454).
And Wittgenstein’s response, which I have to reduce to its bare essentials for the pur-
poses of a quick sketch, is to suggest that when we fall into this kind of puzzlement
about how a mere inanimate thing can have such properties, we are forgetting that
the meaningful thing we are thinking about—a word, a signpost, a specification of a
number series—is what it is by virtue of its involvement in a human practice. If we
do not forget that context, we shall not be tempted to think we are dealing with
mere inanimate things mysteriously infused with powers that properly belong to
souls. Ideally, that enables us to take in stride the idea of words for things, or sign-
posts that indicate directions, or specifications of number series. That is, the felt
mystery, in principle, evaporates, though of course that cannot happen in as short
a time as it takes to describe it.
Now it needs to be stressed that this kind of move, in which we remind our-
selves of the human context in which, say, signposts are what they are, does not
require us to describe what people do in the relevant practice without using the
concept of, to stay with the same example, a signpost: something that points in a
certain direction. There is nothing in Wittgenstein to suggest that alleviating puzzle-
ment about the concept of an inanimate thing that points the way requires us to
describe a practice within which that concept has application, but without using
the concept itself in our description of the practice. The mystery about how a mere
inanimate thing can indicate a direction evaporates when we remind ourselves of
the familiar human practice in which signposts are used. Signposts are not mere
inanimate things, in the sense that figures in the puzzlement; on the contrary, they
are signposts. And there is no reason to avoid using the idea of something that
points in a certain direction when we state the content of this reminder. Nothing
here recommends aiming to give an analysis of the concepts that figure in the style
of philosophical puzzlement Wittgenstein is concerned with—not even an analysis
that is pragmatically mediated.
Brandom notes that the interest of his analytic project is sensitive to the vocab-
ulary in which we specify a practice PV-sufficient for some vocabulary we are aim-
ing to cast light on. But he does not dwell on the question whether we can expect
to be able to do that in a way that permits something we can conceive as analysis of
the target vocabulary. And the question is surely crucial for the prospects of his new
project of analysis.
In specifying a practice PV-sufficient for a vocabulary, we might simply use,
in second intention, expressions that belong to the vocabulary whose use we are
describing. (“You have to use the word ‘person’ in such a way as to be able to be
understood to be saying things in which the concept of a person figures.”) If we
use such terms in describing how a vocabulary needs to be used to have its signifi-
cance, we do not produce something with a counterpart, now in a pragmatically
inflected version, of the kind of interest that was supposed to be possessed by
achievements in the classical project of analysis. We do not contribute to anything
that would deserve to be called “a project of analysis.” And I have been urging that
Wittgenstein’s response to philosophical anxiety about meaningfulness überhaupt
contains nothing to encourage analytic aspirations for pragmatic metavocabularies.
In some areas it may be possible to do better than that by analytic standards.
But it is not really clear why we should think “better by analytic standards” implies
“better, period.” Surely whether a pragmatic analysis of a target vocabulary would
be a good thing to have would depend on what, if any, legitimate puzzlements there
are about the target vocabulary. Brandom does not here consider such questions.
Note that Sellars’s argument that there could not be a stand-alone practice of
making observation reports does not depend on describing the practice otherwise
than as the practice of making observation reports. Sellars’s argument does not
need to exploit something that could be turned into a pragmatist analysis of the
contents, or kinds of content, expressible in the practice.

IV

In this lecture Brandom mentions a putative example of pragmatist analysis that I
think is open to question, and I shall end by questioning it.
It used to be common to explain indexical vocabulary in terms of nonindexi-
cal vocabulary, on a pattern exemplified by saying that a tokening of “I” refers to the
speaker of the utterance that includes itself. More recent work has made it obvious
that nothing like synonymy is in question here; so obvious that, as Brandom says,
the question arises what the philosophers who offered such explanations can have
been thinking. Brandom suggests they were ineptly registering a pragmatically
mediated semantic relation between indexical and nonindexical vocabulary. Their
real point was that we can say, entirely in nonindexical terms, what one must do to
be deploying indexical vocabulary correctly.
Brandom offers to do that for indexicals in general. But I shall restrict atten-
tion, for simplicity, to the case of the first person. Brandom’s proposal about this is
that there is a practical rule on these lines: if a speaker s wants to assert that some
property P holds of s, it is correct for s to say “P holds of me.”
But that seems wrong. It is correct, in the only sense that seems relevant, for s
to say “P holds of me” only if what he wants to assert is that P holds of himself,
where “himself ” is the kind of reflexive that, as Anscombe explains, can be under-
stood only as an oratio obliqua counterpart of the first person. If that is right, an
accurate formulation of the correctness condition is after all not intelligible inde-
pendently of understanding the first-person form. That is, it is not intelligible inde-
pendently of understanding the relevant region of indexical vocabulary. Suppose s
wants to say that P holds of s without wanting to say the property holds of himself,
as he may if he does not know that s is himself. His wanting to say that P holds of s
does not make it relevantly correct for him to say “P holds of me.” Admittedly, if
he were to say “P holds of me,” he would be saying something that would be true
just in case the property does hold of what, on our hypothesis, he wants to say the
property holds of. But saying “P holds of me” would not be a correct way for him
to say what, on the hypothesis, he wants to say, namely that P holds of s. This is just
the point on which the classical analysis goes wrong; it seems to undermine
Brandom’s sketch of a pragmatist analysis too.
There is an obvious answer to the question Brandom raises, the question what
Russell, Carnap, and Reichenbach can have been thinking. Their nonindexical for-
mulations give, correctly enough so far as this goes, conditions under which the
indexical utterances they are concerned with are true. Their mistake was to take that
relation between the nonindexical formulations and the indexical utterances to be
something on the lines of synonymy. It is not clear that Brandom’s recasting of their
claims in terms of a pragmatically mediated relation between vocabularies is any
improvement.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Brandom’s Demarcation of Logic

John MacFarlane
University of California, Berkeley

I find Robert Brandom’s proposal for the demarcation of logic very exciting.1 I’ll
try to explain why. Then I’ll mention a few things I still find puzzling about the pro-
posal, in the hopes that Brandom can clarify them.

In my dissertation (which I wrote under Brandom’s supervision), I argued that in
order to understand the confused state of contemporary debates about the demar-
cation of logic, one has to go back to Kant.2 Following tradition, Kant thought of
logic as a normative discipline, with the job of identifying norms for thought. On
this broad construal, it makes sense to talk of (say) the logic of jurisprudence, or of
geometrical thinking, or of biological thinking. Kant called these “special logics.”
But in addition to all the special logics, which provide “rules of correct thinking as
regards a certain kind of object,” Kant recognized a “general” logic, which provides
rules for thought as such, regardless of its objects (and even regardless of whether
it has an object).
When Kant made his critical turn, the notion of general logic posed a serious
problem for him. He held that concepts could have representational content (objec-
tive validity) only insofar as they applied to some object that could be given to us
in intuition (that is, in a singular representation). And he now held that (for us
mortals) all singular representation is sensible. It follows that concepts can have
content only insofar as they apply to potential objects of the senses. So, if we are
after norms for thought as such, and accordingly abstract entirely from sensibility,
we thereby abstract entirely from representational content, and hence, from truth.
The norms of general logic, then, could not be rooted in very general facts about
reality, as Kant’s Leibnizian predecessors seem to have thought. (Wolff: “It is plain
. . . that principles should be sought from ontology for the demonstration of the
rules of logic.”3)
The problem, then, was to explain how there could be norms for thought as
such, if thought is intelligible independently of its relation to objects. These norms
could not be grounded in metaphysics, for the reasons given above; nor could they
be grounded in empirical psychology, which tells us how we do think, not how we
ought to. So by virtue of what are thinkers qua thinkers bound by these norms?
Kant’s solution is well known, because it had a profound impact on thinking
about logic well into the twentieth century. The norms of general logic, he says,
concern only the form of thought, “the formal conditions of agreement with the
understanding.”4 That is why they are binding on thought as such, whether or not
it relates to potential objects of the senses. But “since these conditions can tell us
nothing at all as to the objects concerned, any attempt to use logic as an instrument
(organon) that professes to extend and enlarge our knowledge can end in nothing
but mere talk.”5 So we explain the universal bindingness of general logic on thought
by taking it to be formal and denying that it has any content or distinctive concepts
of its own.
This is a nice, satisfying story, but in later thinking about logic, it begins to
unravel. What happens in Frege is particularly interesting, because given Frege’s
ambition to show that arithmetic is implicit in pure logic, it really matters how logic
is demarcated. Frege retains the Kantian idea that logic is distinguished from other
disciplines by providing norms for thought as such, but he rejects the Kantian con-
ception of logic as “formal.” His rejection is, I think, overdetermined: it is due in
part to his important technical advances and in part to his philosophical differences
from Kant.6
Frege’s replacement of the old subject-predicate conception of logical struc-
ture with a function-argument conception makes Kant’s story about the form of
thought, as represented in the Table of Judgements, unavailable to him. Universality
and negation are no longer regarded as ways in which subject and predicate can be
related in judgment, but as concepts in their own right, in the same semantic cat-
egories as many nonlogical concepts. “∀x” refers to a second-level concept, just like
“applies to Socrates”; “¬” refers to a first-level concept, just like “is a horse.” If there
is any notion of the “form of a thought” available to Frege, it is the pattern of func-
tional application, but this is too meager to form the basis of anything recognizable
as logic.

In addition, Frege rejects some of the Kantian assumptions that required Kant
to claim that logic was “formal.” He rejects Kant’s idea that concepts have content
or significance only insofar as they can be applied to objects given in intuition (in
part because of his new function/argument conception of logical structure, which
allows him to see purely quantified judgments as relating concepts, with no “rela-
tion to an object”). He also rejects the Kantian view that all singular representation
is sensible, claiming that we can have singular thoughts about nonsensible objects,
like numbers. So unlike Kant, he is not forced to the view that norms for thought
as such must be grounded in the formal conditions of thought.
In the end, Frege clearheadedly rejects the Kantian doctrine that general logic
must be formal. He takes logical norms to be grounded in very general truths about
the world, and he takes logic to have its own concepts, from whose content it can-
not abstract. “Just as the concept point belongs to geometry,” he says, “so logic, too,
has its own concepts and relations; and it is only in virtue of this that it can have a
content.”7
The trouble is, having jettisoned Kant’s explanation of the absolutely general
bindingness of logical norms on thought as such, Frege has no very good explana-
tion of his own. What is it about thought that makes it the case that these very gen-
eral truths about the world—the truths described in Frege’s logical axioms—give
rise to norms for thought as such? On this point, he says nothing very useful, and
even hints that nothing useful can be said.
Frege’s student Carnap takes a different approach. He embraces the Kantian
“formality” idea that Frege rejected, while rejecting the conception of logic as nor-
mative for thought as such. “The formal sciences do not have any objects at all,” he
says; “they are systems of auxiliary statements without objects and without con-
tent.”8 They are, as such, completely unconstrained by facts about the world; they
constrain thought by defining a “linguistic framework” within which thought can
proceed. But because we can pick different frameworks for different purposes, no
one framework can lay claim to being “general” in Kant’s sense; that is, normative
for thought as such. We may use different, incompatible systems of rules in differ-
ent inquiries, for different purposes.
Other thinkers rejected both elements of the Kantian view of logic, finding
them insufficiently clear for scientific purposes. Since this is Prague, I must men-
tion Bolzano, who combined criticism of Kantian talk of formality with skepticism
that any principled line can be drawn between logical and nonlogical notions.9 In
the twentieth century, we find Tarski first echoing Bolzano’s skepticism that any
principled line can be drawn between logical and nonlogical notions,10 and then,
later in life, proposing a demarcation of logical concepts as those that are invariant
under permutations of the underlying domain.11 Here he appeals to considerations
independent of either generality (in Kant’s and Frege’s sense) or formality, and cer-
tainly incapable of justifying a privileged role for logical vocabulary in conceptual
analysis.

By the late twentieth century, things had gotten very confused. There was no
shortage of principled proposals for how to demarcate logic. All of these propos-
als made some contact with how the discipline was historically conceived. But it
had become quite unclear what the debate is really about. Reflecting on the situa-
tion, some thinkers quite reasonably concluded that the demarcation problem is a
pseudoproblem—either because logic is a family resemblance concept with no
principled definition, or because all analytic consequences are to be counted as log-
ical, or because the historical “trunk” of logic had branched into many different
things with nothing particular in common.12
What I like about Brandom’s proposal is that it connects quite directly with
what I regard as the historically central tradition of thinking about logic, the one
that runs through Leibniz, Kant, and Frege. It can even be regarded, I think, as a
“pragmaticized,” modernized version of the Kantian view. On Brandom’s view,
thought—or as he would prefer to say, discursive activity—has a form, insofar as it
is made possible by the existence of certain basic “practices or abilities.” For exam-
ple, in order to count as engaging in discursive activity at all, one must be able to
do something that counts as classifying propositions into those one would assert
and those one would reject, and one must be able to do something that counts as
classifying inferences as good and bad. Logical vocabulary is vocabulary that is both
elaborated from and explicative of these basic practices, which we might regard as
the pragmatic “form” of discursive activity. It can be legitimately applied in expli-
cating any other vocabulary, because the ability to deploy that vocabulary already
suffices, when appropriately “algorithmically elaborated,” to deploy the logical
vocabulary as well. So we have a kind of explanation of the universal applicability
and universal bindingness of logic on discourse that appeals to logic’s essential con-
nection to the underlying form or basis of discursive activity. In short, a view with
the same basic shape as Kant’s. (There are also many differences, to be sure—for
example, Brandom has no need to deny that logic has its own contentful con-
cepts—but I want to emphasize the similarities.)
Once Brandom’s proposal is in view, I think, we can see how other demarca-
tion proposals—specifically the Gentzen-inspired proposals of Popper, Kneale,
Hacking, and others13—might be seen in a similar light. For they all take the logi-
cal vocabulary to be vocabulary whose use can be explained by algorithmic elabo-
ration from certain basic abilities, like the ability to distinguish good inferences
from bad or to recognize certain kinds of patterns. The missing piece in all these
projects is an argument that these basic abilities are essential to anything that can
count as discursive activity at all.
(Here there’s plenty of room for interesting argumentation. For example: if
quantifiers are to count as logical, on Brandom’s view, it must be the case that any
autonomous discursive practice must include subsentential structure. But why
should that be the case?)

II

Having said why I like Brandom’s proposal, I now want to raise some questions
about it. At the heart of Brandom’s account of logicality is the notion of a “univer-
sal LX-vocabulary.” A vocabulary V is universal LX just in case there are sets PA and
PB of practices-or-abilities such that:

1. V is VP-sufficient for PA (that is, V suffices to explicate PA),
2. PA ⊆ PADP (that is, PA is PV-necessary for every autonomous vocabulary),
3. PA is PP-sufficient for PB (that is, PB can be algorithmically elaborated
from PA), and
4. PB is PV-sufficient for V (that is, PB suffices for the deployment of V).

The paradigm of a universal LX-vocabulary is the vocabulary of conditionals, Vcond.
Vcond, Brandom says, is VP-sufficient for Pinf, the practice-or-ability of distinguish-
ing good material inferences from bad ones. That is, Vcond is sufficient to make this
practice explicit. Pinf is, in turn, PP-sufficient for Pcond, the practices-or-abilities that
underlie our use of Vcond. That is, the practices underlying use of conditionals can
be algorithmically elaborated from the practices involved in distinguishing good
material inferences from bad ones. So we have a neat triangle. The vocabulary of
conditionals is elaborated from the very same inferential practice that it explicates.
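A deliberately crude sketch may help fix ideas. Nothing in it is Brandom’s own machinery: the representation of sentences as strings, the finite stock of endorsed inferences, and the names of the abilities are simplifying assumptions of the sketch.

# A toy model of the conditional "triangle": the ability to assess material
# inferences (P_inf), chained in the simplest possible way, already puts
# conditionals to work (P_cond), and conditional sentences in turn let one
# say which inferences are endorsed (V_cond explicating P_inf).

class InferentialPractice:
    """A practitioner idealized as a finite stock of endorsed material inferences."""

    def __init__(self, good_inferences):
        # pairs (premise, conclusion) the practice counts as good
        self.good_inferences = set(good_inferences)

    # the base ability, P_inf: classify material inferences as good or bad
    def endorses_inference(self, premise, conclusion):
        return (premise, conclusion) in self.good_inferences

    # P_cond, elaborated from P_inf by trivial chaining: one is disposed to
    # assert "if p, then q" exactly when one endorses the inference from p to q
    def assert_conditional(self, p, q):
        return self.endorses_inference(p, q)

# the explicative (VP) leg: conditional vocabulary used to say which
# inferences the practice treats as good -- only partially, as argued below
def explicate(practice, sentences):
    return ["if {}, then {}".format(p, q)
            for p in sentences for q in sentences
            if p != q and practice.endorses_inference(p, q)]

practice = InferentialPractice({("the match is struck", "the match will light")})
print(practice.assert_conditional("the match is struck", "the match will light"))
print(explicate(practice, ["the match is struck", "the match will light"]))

The point of the sketch is only the shape of the triangle: assert_conditional is obtained from endorses_inference by chaining of the most elementary kind (the PP-sufficiency leg), while explicate uses conditional sentences to state which inferences are endorsed (the VP-sufficiency leg). Whether that statement can ever be more than partial is the question I turn to below.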
My questions concern the key relations of PP-sufficiency and VP-sufficiency—
the L and the X in “universal LX.” Let’s start with PP-sufficiency. Brandom says that
the notion of algorithmic elaboration gives a definite sense to the claim
that the one set of abilities is in principle sufficient for the other. This is
the sense in which deploying logical vocabulary requires nothing new
on the part of discursive practitioners: anyone who can use any base
vocabulary already knows how to do everything needed to deploy any
universal LX-vocabulary. (33)

But of course this is true only for practitioners who already possess whatever abil-
ities are needed for the algorithmic elaboration of one practice-or-ability from oth-
ers: for example, the ability to select which practice-or-ability to exercise based on
the state it is in, or the ability to chain together two practices-or-abilities so that the
output of one serves as the input to the other, or to substitute one response for
another in a repertoire it already possesses. A creature that did not possess these
basic algorithmic meta-abilities would not have everything needed in order to
deploy any universal LX-vocabulary. So a more careful statement of Brandom’s
claim would be this:
Anyone who can use any base vocabulary and who also has capacities
for algorithmic elaboration already knows how to do everything needed
to deploy any universal LX-vocabulary.14

This restatement makes it obvious that underlying Brandom’s demarcation of log-
ical vocabulary is a pragmatic demarcation of capacities into “algorithmically elab-
orative” capacities and all others.15 What vocabularies count as universal LX will
depend, for example, on whether the capacities that count as algorithmically elab-
orating are limited to those that can be implemented in a finite state automaton. I
wonder, then: how much of the work of demarcating universal LX-vocabulary is
being carried by the underlying demarcation of algorithmically elaborative capac-
ities? How sensitive is Brandom’s demarcation of logic to this underlying demarca-
tion? And what demarcation is he presupposing? As Brandom notes, there are
serious differences in strength between different idealizations of algorithmic elab-
orability (single-state automata, finite-state automata, two-stack push-down
automata, etc.). But he does not say which he is presupposing in his analysis, and
he does not make clear how much of the load of demarcating logical vocabulary is
borne by this choice.

III

Let’s now turn to VP-sufficiency. Consider the paradigm example of conditionals.
The vocabulary of conditionals is supposed to “suffice to explicitly specify” the
practice-or-ability of distinguishing good material inferences from bad ones. On a
weak reading, the claim might be just that by using conditionals, we can partially
describe the inferential practice. On a stronger reading, the claim is that using the
language of conditionals, we can fully describe the inferential practice.
I suspect that Brandom intends the stronger reading here. He says in Lecture
1 that
VP-sufficiency is the relation that holds between a vocabulary and a set
of practices-or-abilities when that vocabulary is sufficient to specify
those practices-or-abilities. . . . VP-sufficient vocabularies let one say
what it is one must do to be engaging in those practices or exercising
those abilities. (16)

To specify a practice, one assumes, is not just to say some true things about it, but
to characterize it completely. The stronger reading is also suggested by Lecture 1’s
syntactic examples: when Brandom notes that context-free vocabularies suffice to
specify any Turing machine, he means that they suffice to specify the Turing
machines completely, not just partially. (The latter claim wouldn’t be very interest-
ing.) And it seems required for the idea that “pragmatic expressive bootstrapping”
can occur when one vocabulary is VP-sufficient for practices-or-abilities that are
PV-sufficient for another vocabulary.
However, it seems to me that in the paradigm case of conditionals, Brandom
is only entitled to the weaker claim. It is plausible that by using conditionals, we can
partially describe the inferential practice that gives our concepts their contents. But
to completely characterize this practice, we will need more expressive power.

One reason for this is that, in order to use conditionals to explicitate an infer-
ential practice involving sentences A, B, and C, one would need to use these sen-
tences. “If A and C, then B” expresses an inferential propriety; “If ‘A’ and ‘C’, then
‘B’,” which mentions the sentences without using them, is ungrammatical; and “if
. . . then” by itself says nothing. So a vocabulary V cannot completely describe an
inferential practice involving, say, snail talk, unless it contains lots of sentences
about snails, in addition to conditionals.
It is difficult to see how this simple point can be squared with Brandom’s view
that the vocabulary of conditionals, Vcond, is universal-LX. If Vcond includes sentences
about snails, then it is implausible that a set of practices-or-abilities that is PV-suf-
ficient for deploying Vcond could be elaborated out of a set of practices-or-abilities
that is PV-necessary for every autonomous vocabulary. However, if Vcond doesn’t
include sentences about snails, it is implausible that it is VP-sufficient for any set of
practices or abilities at all.
Even if we ignore this problem and allow the explicitating vocabulary to
include not just conditionals, but sentences for them to connect, we will still not
have enough to make a material inferential practice fully explicit. For whether one
takes the inference from A to B to be a good one will depend not just on A and B
but on one’s other commitments. For example, I might endorse the inference from
“The match is struck” to “it will light” in most circumstances, but not when I also
endorse “the match is wet.” The language of conditionals allows one to make
explicit the inferential proprieties one recognizes relative to one’s current back-
ground commitments. But doing this only partially characterizes one’s inferential
practices. Two inferential practices that agree on which inferences are good relative
to a set K of background commitments might diverge wildly on which inferences
are good relative to a different set K'. To describe the difference between these prac-
tices, it seems to me, we will need more than the language of conditionals. (It won’t
help to add the background commitments explicitly to the antecedents of the con-
ditionals, because in general the required “ceteris paribus” clause will not be finitely
specifiable. As Brandom says in Lecture 4: “There need be no definite totality of
possible defeasors, specifiable in advance.”)
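Schematically, suppose we write $A \vdash^{i}_{K} B$ when practice $i$ counts the inference from $A$ to $B$ as good relative to background commitments $K$ (the notation is merely a convenience). Then the situation is one in which
\[
A \vdash^{1}_{K} B \;\Leftrightarrow\; A \vdash^{2}_{K} B \quad \text{for all } A, B,
\qquad \text{while} \qquad
A \vdash^{1}_{K'} B \;\not\Leftrightarrow\; A \vdash^{2}_{K'} B \quad \text{for some } A, B.
\]
Conditionals endorsed relative to K record only the facts on the left, and so cannot register the difference between the two practices.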
If conditionals aren’t VP-sufficient to specify an inferential practice, is there
some other vocabulary that is? I don’t think bringing in modal vocabulary and
counterfactual conditionals will help. Counterfactuals can give us partial informa-
tion about how the inferences a speaker endorses depend on background assump-
tions. But I don’t see how they can give us the full information about this we would
need for a complete specification of an inferential practice. The reason, again, is
that endorsement of the counterfactuals themselves will depend on background
assumptions. If I think there’s a trampoline below me, I’ll endorse the counterfac-
tual “If I jumped out of this building, I’d live,” but not if I think there is a moat con-
taining crocodiles.
A vocabulary V such that a set of sentences in the language L∪V can com-
pletely specify the material inferential practices underlying the language L will pre-
sumably need to contain a sign for entailment and a way of talking about (countably
infinite) sets of sentences. We’ll then be able to say things like, “the set X of sentences
entails the sentence A relative to the set Y of background commitments.” The total-
ity of true statements of this form will completely specify the background inferen-
tial practices. But now we are outside of paradigm “logical” territory. Certainly this
is a far cry from the comfortable conditional, and it seems highly unlikely that this
vocabulary could be supported by practices algorithmically elaborable from prac-
tices necessary for any autonomous vocabulary.

NOTES

1. This is a lightly revised version of my comments on Robert Brandom’s second Locke Lecture,
“Elaborating Abilities: The Expressive Role of Logic,” as delivered in Prague at the “Prague Locke
Lectures” in April 2007.
2. John MacFarlane, What Does It Mean to Say That Logic Is Formal? (Ph.D. dissertation, University
of Pittsburgh, 2000). Available at http://johnmacfarlane.net/dissertation.pdf.
3. Christian Wolff, Philosophia Rationalis Sive Logica, sec. 89. In Gesammelte Werke 2.1, ed. J. École
et al. (Hildesheim and New York: Georg Olms, 1983).
4. Critique of Pure Reason, A61/B86, trans. Norman Kemp Smith (New York: St. Martin’s, 1929).
5. Ibid.
6. For a much more detailed discussion, see my “Frege, Kant, and the Logic in Logicism,” Philo-
sophical Review 111 (2002): 25–65.
7. “On the Foundations of Geometry: Second Series,” trans. E.-H. W. Kluge, in Gottlob Frege,
Collected Papers on Mathematics, Logic, and Philosophy, ed. Brian McGuiness (Oxford: Blackwell,
1984).
8. “Formalwissenschaft und Realwissenschaft,” Erkenntnis 5 (1934); translated as “Formal and
Factual Science,” in Readings in the Philosophy of Science, ed. H. Feigl and M. Brodbeck (New York:
Appleton-Century-Crofts, 1953), 128.
9. Wissenschaftslehre, second edition, ed. Wolfgang Schultz (Leipzig: Felix Meiner, 1929). Partial
translation as Theory of Science, ed. and trans. Rolf George (Berkeley and Los Angeles: University
of California Press, 1972).
10. “On the Concept of Logical Consequence” (1936), in Logic, Semantics, Metamathematics, second
edition, trans. J. H. Woodger, ed. John Corcoran (Indianapolis: Hackett, 1983), 409–20. Morton
White, “A Philosophical Letter of Alfred Tarski” (1944), Journal of Philosophy 84 (1987): 28–32.
11. “What are Logical Notions?” (1966 lecture), ed. John Corcoran, History and Philosophy of Logic 7
(1986): 143–54.
12. For a survey and critical discussion, see Mario Gómez-Torrente, “The Problem of Logical
Constants,” Bulletin of Symbolic Logic 8 (2002): 1–37, and my article “Logical Constants” in the
Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/logical-constants.
13. Karl Popper, “Logic Without Assumptions,” Proceedings of the Aristotelian Society n.s. 47 (1946–7):
251–92; William Kneale, “The Province of Logic,” in Contemporary British Philosophy, ed. H. D.
Lewis (London: George Allen and Unwin, 1956), 237–61. Ian Hacking, “What is Logic?”, Journal
of Philosophy 76 (1979): 285–319; Kosta Došen, “Logical Constants as Punctuation Marks,” in
What Is a Logical System?, ed. D. M. Gabbay (Oxford: Clarendon Press, 1994), 273–96.
14. Cf. Lecture 2, p. 10: “Algorithmic PP-sufficiency is what holds in case if a system does have those
algorithmic abilities, then that is all it needs to elaborate its basic abilities into the complex one in
question.”
15. This is explicit in Lecture 2, p. 5: “algorithmic elaboration of primitive abilities into complex ones
plays the same role in pragmatic analysis that logic does in semantic analysis.”

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

The Computational Theory of Mind and
the Decomposition of Actions

Pirmin Stekeler-Weithofer
University of Leipzig

I. INTRODUCTION

Since the time of Thomas Hobbes and René Descartes at the latest, a ‘new’ image
of man has been replacing an ‘old’, religion-based self-image.1 According to the old
image, human beings consist of a mortal body and an immortal soul. According to
the new picture, man appears as a kind of cross between an animal and a comput-
ing being. But what seems to be ‘new’ from a Christian or theological perspective,
in effect, has its roots in Ancient philosophy, where we already find a comparison
of thinking, or language-use, with calculation. The word “logos” stands for “word,”
“expression,” “language,” “mathematical ratio,” “number,” as well as for “reason.”
Reasoning as the (silent or audible) use of words is often called “logizesthai,” “rati-
ocinari,” i.e. computing. Hence, Aristotle’s formula “zoon logon echon” for man as a
rational animal already supports, at least implicitly, an understanding of man’s
thinking as calculating with expressions. Modern functionalism and the artificial-
intelligence approach to human ‘sapience’, as Robert Brandom calls it, are thus deeply
rooted in age-old linguistic analogies, partly developed by philosophers.
It is this highly attractive self-image of us humans as rational animals2 that
Martin Heidegger opposes in his famous (or perhaps infamous) criticism of what
he calls ‘humanism’.3 In fact, Heidegger’s deconstruction of the basic contentions
in the contemporary philosophy of mind and the theory of cognition tries to show
(hopelessly, for most readers) that we go astray when we identify language-use with
calculating. Thinking is much more than (and something different from) comput-
ing. Language is much more than (and something different from) a mathematical
calculus.
Ludwig Wittgenstein in his later philosophy also rejects the idea of language-use
as schematic calculation and opposes what he calls the mainstream of the twentieth
century’s civilization and philosophy. Wittgenstein sees, moreover, that computing is
a very special language game. Exact, i.e. schematic, mathematical thinking presup-
poses very special linguistic techniques of regimentation. In other words, making
sense of the use of mathematical language and rules, especially in applied mathe-
matics or in the ‘exact’ sciences, e.g. in physics, presupposes a much more rigorous
understanding of language as a whole and of our common generic experience than
twentieth-century ‘empiricist’ and ‘logicist’ accounts of scientific methodology usu-
ally convey. According to these accounts, scientific knowledge as a whole appears as
a kind of unclear, ‘holistic’ mixture of empirical perception (in the extreme
case, of merely subjective sense-data) and theoretical inference.
According to Brandom’s pragmaticist approach, we use formal theories and
formal logic in order to make implicit norms of material inferences explicit. This is
an important idea, which goes back to Wilfrid Sellars. Unfortunately, however, the
idea that the rules of deduction and the axioms of formal theories make implicit
forms of material, i.e. empirical, inferences explicit may eventually mislead us. For
example, it can be abused as an excuse for not reflecting rigorously upon the com-
plicated practices of theory formation, theory evaluation, and theory application.
It is not enough to appeal to some general ‘usefulness’ of a formal explication, as
some versions of holistic pragmatism or pragmatic holism suggest. The pragmatist
and inferentialist understanding of formal theories (and their internal logics) may
not be much better off than psychology, which (still) is an ill-understood amalgam
of empirical observation and theoretical formalization or computational explana-
tion, as Wittgenstein famously put it.
These preliminary remarks are meant to set the stage for my reflections on
Brandom’s approach to artificial intelligence and analytic pragmatism.4 My leading
question is this: What does it mean to be satisfied with a (partly ‘theoretical’ or ‘sci-
entific’, partly empirical) ‘account’ of the rational faculties that make us human?
What does it mean to say that we (want to) ‘understand’ the difference between man
and animal? In our everyday life, we know this difference all too well. So, what kind
of game do we play when we ‘forget’ what we already know—just in order to be
‘reminded’ by a theoretical account of what being human allegedly consists in?5
Since the usual notion of ‘implicit knowledge’ presupposes that we can make
it fully explicit, it is often helpful to use Karl Bühler’s neologism ‘empractical knowl-
edge’6 for something we already know how to do. Such a competence is usually man-
ifested by a ‘habitus’ of correctly taking part in a joint and generic ‘practice’—at will,
as we say, when it comes to concrete actualizations of the generic competence.
When we talk about the form(s) of human life, we talk about systems of such prac-
tices and the corresponding actions.7 The question is what to expect from an ‘analy-
sis’ of such practices, actions, and faculties. Why are we not satisfied with an appeal
to empractical knowledge? I.e., what do we ask for when we ask for an explicit
account of how we know how to do things? And how does such an account relate to
appellative knowledge, as I would like to call the knowledge of what vague and
comprehensive titles like “thinking” and “speaking” or “acting” and “intending”
appeal to?

II. ON SECTION 1: AI-FUNCTIONALISM

This leads me to my first question: What could be “an account of what one must do
in order to count as saying something”? I.e., for what purpose do we need such an
account? And what are possible satisfaction conditions? Brandom sees, of course,
that such an account makes implicit (empractical) forms of actions and practices
explicit. Hence we need “a characterization of another vocabulary that one can use
to say what it is one must do in order to be saying something else.”8
The dream of all philosophical constructivists is to develop a pragmatic meta-
vocabulary which is, in one sense or another, “demonstrably expressively weaker
than the vocabulary for which it is a pragmatic metavocabulary.” The idea is to
develop a language that is somehow less complicated than the language we want to
give an account of. As a result, we can use this metalanguage on the meta-level in
order to explicate our empractical knowing-how that is involved in using the
object-language for which we wish to develop a reflective reconstruction, i.e. the
‘account’ in question. A paradigm for such a project can be found in Paul Lorenzen’s
construction of an ortho-language and his distinction between a merely comment-
ing ‘para-language’ and a preliminary ‘protreptic’ language, in which the goals of the
(re)constructions are (vaguely) specified. Brandom calls an enterprise like this
“pragmatic expressive bootstrapping.” Another example for a similar program is
W. V. Quine’s story about an alleged behavioral foundation of language-use and the
formal or logical roots of reference in his masterpiece Word and Object (1960).
But I already have an initial problem with Brandom’s own most cherished
example. The example appears in chapters 1 and 2 under the title “Syntactic Prag-
matic Bootstrapping within the Chomsky Hierarchy of Grammars and Automata.”
My question is this: In which sense is it true that “context-free vocabularies are VP-
sufficient to specify Turing machines (two-stack push-down automata)”?
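For orientation, it may help to recall the standard construction behind the parenthetical remark: two stacks jointly simulate a Turing machine’s tape, one stack holding the cells to the left of the head, the other the current cell and everything to its right. The following sketch, in Python, is merely illustrative (nothing of this sort appears in Brandom’s text, and the class name is arbitrary); it shows only the data structure, not a full automaton:

    class TwoStackTape:
        # Two stacks jointly simulating a Turing-machine tape.
        def __init__(self, word, blank="_"):
            self.blank = blank
            self.left = []                                  # cells to the left of the head
            self.right = list(reversed(word)) or [blank]    # top of this stack = current cell

        def read(self):
            return self.right[-1] if self.right else self.blank

        def write(self, symbol):
            if self.right:
                self.right[-1] = symbol
            else:
                self.right.append(symbol)

        def move_right(self):
            self.left.append(self.right.pop() if self.right else self.blank)

        def move_left(self):
            self.right.append(self.left.pop() if self.left else self.blank)

    tape = TwoStackTape("abc")
    assert tape.read() == "a"
    tape.write("X"); tape.move_right()
    assert tape.read() == "b"
    tape.move_left()
    assert tape.read() == "X"

The construction shows the sense in which a Turing machine can be finitely specified; whether such a specification also settles what we know about the functions represented is precisely the question pursued below.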
Things get more perspicuous if we replace Turing machines by recursive func-
tions. Then it appears as a trivial fact that the recursive definitions of syntactically
complex standard names for such functions are much more ‘basic’ and ‘primitive’
than the functions named by these terms. But the use of these terms is not com-
pletely defined by the syntactic definition of the terms, as I will show. Hence, the
‘bootstrapping’ of making explicit what recursive functions are has to appeal to
our empractical knowledge of how to use the terms. We tend to forget this. The fact
that we can realize some Turing machines by real automata is the ground for this
forgetfulness.
This much is true, however: The primitive steps used in recursive calculations
and in Turing machines follow extremely easy schemes of calculating behavior. But
meta-level judgments by which we determine whether an automaton is operating
(functioning) properly or whether it is defective (‘dead’, as we say metaphorically)
transcend these schemes by far, as we shall soon see.
As is well known, recursive functions are defined by descriptive finite terms
T(x,y, . . .) (for only partially defined recursive functions in the natural numbers or,
what is essentially the same, for recursive search procedures or proto-Turing machines).
I do not specify here the internal structure of T; but T ‘describes’ a systematic search
for a numerical value for any given numerical arguments m,n, . . . . The ‘machine’
stops if it finds a value. The important halting problem focuses on the crucial word
“if”: In order to know if a proto-Turing machine halts for any m,n, . . . or if a par-
tial recursive function is a total recursive function, we need a proof that T(m,n, . . .)
gives a value for any number m, n, . . . . Such a proof must show that, for any single
‘argument’ m, n, . . . , the proto-machine generates a value.
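To see concretely what the crucial “if” amounts to, here is a minimal sketch in Python, purely for illustration (it is mine, not Brandom’s), of how a partial recursive function arises from unbounded search:

    # Unbounded search (the mu-operator): turns a total test into a possibly
    # *partial* function. The while-loop is the systematic search described above.
    def mu(test):
        def f(x):
            y = 0
            while True:          # may never terminate: the halting problem in miniature
                if test(x, y):
                    return y
                y += 1
        return f

    # Halts exactly when x has a proper divisor (composite x); runs forever otherwise,
    # e.g. for prime x -- so f is only partially defined.
    f = mu(lambda x, y: 2 <= y < x and x % y == 0)
    assert f(9) == 3

Whether f deserves to be called a total recursive function on some domain cannot be read off from the term that defines it; it needs a proof of exactly the kind just described.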
As a consequence, we (must) use half-formal truth-valued arithmetic, not just
axiomatic Peano arithmetic with its merely schematic deductions of mathematical
formulas, when we want to talk about the total functions represented by semantically
well-formed names for, or by some explication of, a particular (abstract)
‘Turing machine’. A definition or a proof is called ‘half-formal’ if a rule of the fol-
lowing sort with infinitely many premises is involved: The value of “∀x.A(x)” (or
of “all x. A(x)”) is defined as T (for Truth, or the True) if and only if the value of A(t)
is the True for any semantically well-formed number term t, and as F (for Falsehood, or
the False) in any other case. We can use induction on the logical depth of a sentence
p (the number of logical signs occurring in p) in order to see that any complex sen-
tence p of elementary arithmetic is, in the sense just explained, true or false. I.e., by
the above explanation, we have fixed exactly one truth-value for any such p, even
though we do not know of general procedures for how to decide which truth-value
applies for any arbitrary p. But we know perfectly well what it means to prove for
any arithmetically well formed p that it is true. It means to show that the value that
we have attached to p in the ‘half-formal’ definition above is T, not F. Half-formal
arithmetic is, in a sense, the metalanguage for any kind of axiomatic system of for-
mal inferences or deductions. As Kurt Gödel’s proof of his first incompleteness the-
orem (and also Gerhard Gentzen’s proof theory) shows, the very concept of a
formal deduction is structurally identical with a finitely branched tree in which
each branch is finite. A half-formal proof shows that in an appropriately defined
infinitely branched tree (fitting to a corresponding half-formal definition of truth
and to possible proofs), each branch is (or ‘must be’) finite. Natural language is ‘the’
metalanguage for explaining the notions of truth and proof in half-formal systems.
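In proof-theoretic notation, the clause with infinitely many premises described above is essentially the so-called ω-rule; the following display is merely a schematic rendering of the two formulations already given, not a quotation:

\[
\frac{A(\underline{0}) \quad A(\underline{1}) \quad A(\underline{2}) \quad \cdots}{\forall x\, A(x)}\;(\omega)
\qquad\text{and, correspondingly,}\qquad
v\bigl(\forall x\, A(x)\bigr) =
\begin{cases}
T & \text{if } v\bigl(A(\underline{n})\bigr) = T \text{ for every numeral } \underline{n},\\
F & \text{otherwise.}
\end{cases}
\]

A merely formal deduction is a finitely branching tree with finite branches; a half-formal proof employing the ω-rule is an infinitely branching tree whose branches must still all be finite, which is exactly the contrast just drawn.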
Unfortunately, the relation between a fully formal axiomatic system and a half-
formal interpretation is often confused with the relation between a fully formal
object-language and a fully formal metalanguage. The latter is a relation between
an axiomatic object theory T and a meta-theory T* that contains a truth-predicate
for the object theory T. Such theories are investigated in the framework of Tarskian
‘semantics’ or axiomatic model theory. In fact, Alfred Tarski asks for a possible con-
struction or the existence of a meta-level axiomatic system T* that contains some
truth-predicate of the form “x is true” for a given deductive system T, such that
(hopefully) all closed formulas of the form “Np is true if and only if p” can be
derived in T* for any sentence (‘closed formula’) p of T and an operator N that
turns (all) T-sentences p into (unique) names Np in the meta-theory T*. Np is a
name of the sentence p. The process of abstraction involved in the naming-opera-
tion is often overlooked or undervalued when the naming-operation Np is articu-
lated in the usual form “p.”
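Schematically, and in notation of my own rather than Tarski’s, the condition on the meta-theory T* just described is:

\[
T^{*} \vdash \mathrm{True}(Np) \leftrightarrow p \quad\text{for every sentence } p \text{ of } T,
\]

where Np is the name that the operator N assigns to p in T*. Nothing in this condition by itself supplies a standard model or a half-formal definition of truth-values; that is the point of what follows.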
Tarskian semantics proper does not know of (half-formal) definitions of truth-
values. Rather, there is a widespread confusion between half-formal models for
axiomatic systems explained in normal language and merely axiomatic ‘semantics’.
This confusion underlies Davidson’s and post-Davidsonian theories of
language, meaning, interpretation, and truth. The crucial point is this: Tarskian
semantics does not have the expressive power to define standard models at all but
allows only for considerations of the relative consistency or coherence of fully formal
axiomatic theories or deductive systems. Davidson’s theory of radical interpretation
suffers heavily from this defect.
Another basic point relates to the difference between formal ‘languages’ and
‘theories’ on one side, and ordinary languages on the other. We always have to pre-
suppose the empractical mastery of ordinary language when we employ it to
explain the formal rules of syntax and the half-formal rules of semantics for math-
ematical systems. The crux of finitism, e.g. in intuitionism and axiomaticism in
twentieth-century philosophy of mathematics, from L. E. J. Brouwer and David
Hilbert down to Dummett, is that these main movements against so-called mathematical
Platonism make us believe that we, like machines, are only capable of handling
finite schemes of calculation and that we allegedly do not ‘understand’
truth-value arithmetic and infinite proofs. We allegedly can ‘understand’ only
‘finite proofs’, that is, deductions of finite formulas or sentences according to
schematic rules or procedures of computation. But this claim of ‘finitism’ is blatantly
wrong. We widely, and competently, use half-formal arguments in mathematics.
And it is a very good idea that we do so.
Standard arithmetic is even the semantic metalanguage for any system of
recursive functions9 or computing machines. There is no merely syntactic boot-
strapping for this semantic system at all. The reason is this: Axiomatic systems in
the formal sense, by which we want to produce true arithmetical sentences recur-
sively, can be evaluated as correct (or consistent) only by presupposing the very con-
cept of arithmetical truth in one way or another.10 Hence, the significance of the fact
that “context-free vocabularies are VP-sufficient to specify Turing machines (two-
stack push-down automata)” is fairly unclear. For the word “specify” refers only to
the description of the operations or search procedures. It does not refer to our pos-
sible knowledge about the functions represented by these operations.
The second part of Brandom’s claim is, in a sense, harmless. It says that Turing
machines “are in turn PV-sufficient to deploy (produce and recognize) recursively
enumerable vocabularies.” But the hidden problem here is, again, the halting prob-
lem. The difference between merely partial recursive functions and totally defined
recursive functions, or between merely proto-Turing machines and Turing machines, is, so to
speak, crucial for distinguishing between a ‘machine’ without bugs and a machine
with bugs. Any real computing machine is built in such a way that it either presents us
with a value (for the given input argument) or stops running at some point. But if we do
not know if the machine has ‘bugs’ (which presupposes a comparison of the pro-
gram with an intended total function), we do not know if the machine stops (with-
out giving us a value) because the calculation takes too long, or because its correct
value is not defined at all. In order to know things like these, and in order to make
this knowledge explicit, we need the full-fledged concept of arithmetical truth, as it
is half-formally defined for quantificationally complex arithmetical sentences.
Brandom’s claim that the isomorphism in Descartes’ algebraization of geo-
metry “amounts to an encoding of semantic properties in syntactic ones” shows
that reflections like these are relevant. For Brandom’s claim is just not true. Algebraic
geometry is, like arithmetic, not a purely syntactic system. It presupposes a half-formal
semantics of truth-values, as I have recently shown in much more detail.11
Brandom asks if “computer languages” are “sufficient pragmatic metavocabu-
laries for any autonomous vocabulary?” The correct question would rather be,
however, why they are not. My answer runs like this: Any competence to use an
autonomous vocabulary must include the faculty to reflect upon, and change,
schematic rules. Precisely for this purpose, we need the faculty of making implicit
norms explicit. This is done by employing terms that can be used as names for
rules, and by employing sentences and forms of judgments in which we talk about
the rules. This includes not only the faculty of following the rules, but also the fac-
ulty of not following them—if only in order to demonstrate the autonomy of
speech acts, or in order to show publicly that following rules is always a free (spon-
taneous) action which must be distinguished from (physically caused) behavior on
one side, and from mere possibilities on the other. In other words, machines do not
follow rules in the sense of a free, autonomous action. And they cannot talk about
rules and functions in the proper sense of ‘talking about’.
We cannot figure out if a being is a person by just comparing her behavior with
what we expect as proper conduct. What we need is a test of whether the being or ‘something’
can act, i.e. do things or refrain from doing things ‘at will’ or ‘spontaneously’,12 just
as I can ride my bike or dance or utter a certain word at will—or refrain from doing
so, and this repeatedly. In fact, I must be able to show it, virtually, to anybody at any
time—according to the conceptual rule ‘hic Rhodos, hic salta’: if you can dance,
dance here! A machine cannot show things like this. And merely fictional ‘persons’
in utopian stories like Pygmalion or Frankenstein obviously also do not fulfill the
Rhodos condition.
The basic problem is to give a correct account of how we test abilities or facul-
ties and their limits: Testing the faculty of deploying an autonomous vocabulary
consists not only in checking whether the performance meets certain standards of
correctness, or agrees with our normative expectations. Free and spontaneous
action sometimes fulfills the conditions of following explicit rules or principles and
sometimes does not. A child shows her spontaneous ability to, say, sing a song not
only by singing it when asked, but sometimes, or even often, also by refusing to sing
it. This fact makes it impossible to decide whether a certain person has certain
autonomous abilities, faculties or a certain empractical knowledge or knowing-how
like the ability to say something or to use an autonomous vocabulary (autonomously),
just by looking at singular instances of a certain behavior.
When we now turn to the “program of artificial intelligence,” Brandom says
that “artificial intelligence is to be real intelligence, created by artifice. But artificial
diamonds are not real diamonds created by artifice. They are fake diamonds.”
Brandom speaks of a “terrible way to pick out the topic.” And he is right, “real dia-
monds created in a laboratory are synthetic diamonds.” But synthetic ‘intelligence’
is, indeed, only so-called intelligence. This disagreement does not hang on words
alone. Its deeper point is this: We should use the expression “artificial intelligence”
as a label for what researchers in the important field of ‘intelligent systems’ really do
and really achieve. This (theoretical and practical) branch of computer science cre-
ates only fake diamonds, i.e. intelligent systems so-called. Brandom, however, seems
to presuppose that real sapience can be reconstructed as if it were synthetic sapience.
Brandom certainly disagrees with this assessment. For he does not want to do
what he in fact does. He says, “what is at issue is not intelligence—a phenomenon
that admits of degrees and has its primary application to comparative assessments
within the discursive community. It is really sapience that is at issue—something we
language-users have and cats do not.” But this does not help much. For it is just
utopian to claim “that a computer could in principle do what is needed to deploy
an autonomous vocabulary, that is, in this strong sense, to say something.” I agree
that it is not easy to show why this is utopian. But I stick to my claim that it is.
Let me note, first, that computer languages are not even sufficient to specify
“abilities that are PV-sufficient to deploy an autonomous vocabulary” if we think
of the autonomous language of full-fledged arithmetic, taken to consist not just of
a system of configurations of basic words (vocabulary and syntax), but also of a
semantically guided language use, at least with respect to the (half-formal) truth
conditions for arithmetical sentences.

III. ON SECTION 2: CLASSIC SYMBOLIC ARTIFICIAL INTELLIGENCE

I do not want to question that thought-experiments can produce feelings of satisfaction
with respect to certain claims. But I do question the whole method. Arguments
appealing to intuitions and feelings do not help at all. This criticism of method
applies especially to all feelings of satisfaction with functionalism, i.e. with main-
stream philosophy of mind in the second half of the twentieth century. For any seri-
ous assessment of the real achievements in philosophy, computer science, psychology,
linguistics, or other cognitive and neuro-sciences, such feelings, produced by
thought-experiments, should not count at all. What we rather need is a critical and
realistic assessment of the whole program of twentieth-century empiricism and
naturalism, especially for what I would like to call Utopian Artificial Intelligence.
To what extent are the accounts of human (sentience and) sapience provided
by the framework of artificial intelligence just fake accounts, i.e. merely analogical
models which still have to be interpreted in an appropriate way, as metaphors
always have to? I.e., under which aspects are the analogies enlightening, under which
readings misleading? They can, of course, provide some ‘true improvements’ of our
self-image in comparison to some outdated weltanschauung or ideological image of
man. But it may happen as well that we fight an anachronistic battle, if the theories
of the opponents are already completely outdated. For example, believing in an
immortal individual soul has not been a live option for any serious philosophy for
at least two centuries.13
Now, the real challenge for functionalism is, as Brandom says, understanding
beliefs and intentions, rather than pains or sensations (e.g. of red). But for 2,600
years, since Heraclitus and Parmenides, philosophy has known (and therefore anybody
is in principle able to know) that a belief that p must be distinguished from, and at
the same time related to, saying aloud or silently thinking p*, where p* is one of the
many possible linguistic or semiotic representations of p. Moreover, the relation
between p* and p is just the relation between logos and eidos, expression and its con-
tent. Even though I would be the last to deny that meanings and contents cannot
be grasped independently of context, we still should not get drawn into a
sweeping contextualism. Nevertheless, it is true that the meaning of a term is the
role it plays in a more complex system. In the case of words, the important contexts
are sentences. In the case of sentences and utterances, the important contexts are
not only the situations in the world referred to, but the situation of performing
speech acts, which involve (a) the role of the speaker, (b) the role of actual and
possible hearers, (c) the individual act, (d) the generic action performed, (e) joint
generic practices defining the generic action and, in the end, (f) our whole human
life.
We certainly need a via media between traditional dualism (of mind and body)
and modern naturalism. But I doubt that the word “function” is of much help in
finding one.14 It only says that something plays some part or role in a larger process.
Functionalism presupposes, in the end, that words and sentences have functions
(meaning or reference) in speech acts just as things have functions in physical processes.
The crucial differences are, however, those between merely physical and biological processes
on one side, individual and joint action on the other. Talking and listening, writing
and reading are such cooperative practices.15 I.e., written words and sentences rep-
resent possible parts of speech acts, which, in turn, are already parts of a commu-
nicative practice. Words and sentences get their ‘meanings’ only mediated by this
practice.
This is, in a sense, Brandom’s own insight. We could name it ‘pragmatic con-
textualism’. But now we should handle with utmost care a claim like the follow-
ing: “Functional vocabulary applies exclusively to physical objects.” For this claim
is not even really true of life processes. If we read the expression “physical object”
as referring to an object of physics (taken as a particular science in the concert of sci-
ences and human knowledge) it is just wrong that a living body (in German Leib)
is a ‘physical’ object with some emerging biological property, called “life.” Only dead
corpses, stones, or machines are physical objects proper.
The real problem is a non-obvious ambiguity in our usage of words like “physi-
cal” and “physics.” There are parallel ambiguities in our talk about ‘physical objects’.
One sense is ‘being an object of the science of physics’. My body can, of course, be
treated as an object of science, and of techniques, of physics, chemistry, biology, or
of medicine, for that matter. But it is still questionable whether my body, while I am still
living, can really be described and explained as a whole by the methods of contemporary physics.
The deep problem is that we do not know what we say (today) when we say that
we might eventually be able to explain human life completely by physics, or that it
is an empirical question whether the search for such an explanation might succeed or not.
Nobody should deny the truism that if we enlarge the institution of physics in a way
that it incorporates all sciences, including biology, psychology, perhaps even all
humanities, then we can ‘explain’ human life in such an extended ‘physical’ science.
But then we just return to an ancient reading of ‘phusis’, which just means ‘all there
is’, as the etymological connection of “phuein” with “to be” (cf. Latin “fui” or German
“Wesen”) nicely shows.16 If we talk that way, we just annihilate our disciplinary divi-
sion of labor and neglect the Rhodos principle. The principle bars our judgments
from (usually sweeping) appeals to a possible utopian future of physics or natural
sciences: We have to use our concepts today as they are. And this includes the par-
ticular forms in which they refer to our world of real (joint and generic) experience
via our institutions of making differences and explaining different processes, behav-
ior, and individual and joint actions in different vocabularies by using different cat-
egories and systems of generic inferences. We have, so to speak, to dance here and
now conceptually. It is absolutely crucial for any critical science and philosophy to
acknowledge that we do not really understand speculations about what we might
be able to do empirically in a distant future, just as we do not understand stories
about a possible paradise inhabited by angels. What we understand about these sto-
ries is only what we know from today’s world around us.
If we stick to our actual concept of physics and physical objects, it is just wrong
to claim that we can explain life processes physically. In precisely this sense we know
why analogies of valves and thermostats are misleading metaphors. They mislead
us into the category mistake of assuming that the function of a (merely physical)
tool, defined by our goals, plays a similar role as the functions of organs or other
animal parts for a good animal life, or that this role was similar to the roles of sym-
bols in joint human actions or practices.17
John Searle characterizes the “strong thesis of AI” or “classical symbolic AI” by
the slogan: “Mind is to brain as software is to hardware.” This analogy already over-
estimates what computers or “multistate transducing automata” can do. For even
sentience used in the self-orientation of animal movement and animal behavior (as
it may be directed by nonpropositional desires18) transcends the possibilities of
computers and robots by far, as Searle rightly reminds us in this context under the
label of ‘intentionality’. But there is still the quite important terminological distinc-
tion between animal and human intentionality. The distinction goes back to Franz
Brentano and belongs to the central insights of Husserl’s and Heidegger’s philo-
sophical phenomenology: Only human intentionality, not animal desire, refers to
nonpresent possibilities. Such possibilities can (and often must) be made present
by symbolic representations, for example by linguistic propositions. These propo-
sitions describe or define conceptually the fulfillment conditions of the intentional
relation to the possible state.
In a sense, Brandom focuses on this difference between human intentionality
and animal desires.19 And he agrees that thinking (taken as the faculty of sapience)
does not just reduce to “manipulating symbols according to definite rules (the algo-
rithms implicit in automaton state-tables).” Moreover, “physics does not find mean-
ings or semantic properties in its catalogue of the furniture of the world.”20 Only
human symbolic actions, or traces of them, can have meaning. Hence, the operations
of transducing automata are ‘meaningful’ only insofar as we can view them as the
traces of the human action of building, running, and controlling the automata with
respect to the purposes for which we have created them.21 But then, Brandom
returns nevertheless to the crucial claim of Artificial Intelligence that manipulating
symbols according to some internal computational pattern “is not just a simulation
of thinking, but is thinking itself.” This is “a very substantive, potentially controver-
sial theory, with a correspondingly large burden of proof,” as Brandom says. In con-
tradistinction to Brandom, who believes that it is an empirical question if the theory
is true, my contention is that the theory is conceptually wrong, or rather, that we
can show that (and why) we should never believe it.

IV. ON SECTION 3: A PRAGMATIC CONCEPTION OF ARTIFICIAL INTELLIGENCE

The crucial point is that I do not believe “that classy AI’s focus on the Turing test is
appropriate.” I rather would like to distinguish Utopian Artificial Intelligence, or
UAI, from real artificial intelligence. Real artificial intelligence develops and inves-
tigates intelligent systems. Real artificial intelligence does not make use of the Turing
test at all. It only asks what we really can do with our computers and robots.
In contrast to real artificial intelligence, Utopian Artificial Intelligence with
capital letters asks us, under the heading of the so-called Turing test, to judge from
mere observations and evaluations of the verbal behavior of a certain opponent in
a game of questions and answers if the opponent is a machine or a human being.
Of course, we should assume that the machine has some perception devices as
inputs and even ‘can do’ some things as outputs. I.e., we should think along the lines
of a robot rather than of a mere computer. But we can notice the deep
problem of the task nevertheless if we translate the Turing test into arithmetic lan-
guage. Then it is totally analogous to the following task: The tested ‘person’ is pre-
sented with a finite sequence of numbers produced by the other party, the tested
‘machine’. The person is asked to guess which machine or recursive function has
produced the sequence. It is trivially clear that this task will result in a predictable
loss for the tested person. For in order to know whether a recursive function produces the
sequence and which function it is, the finite sequence does not suffice. What I have
to know is the function itself, i.e. a representation of it as a whole by a term for a
recursive function or by a description of the program of the machine. In other
words, it is trivially true that we cannot reconstruct the program of a (Turing)
machine by the mere observation of some of its outputs, or rather, of some observed
pairs of input and output.
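A trivial illustration of this point in Python (my own example, with arbitrary candidate functions): any finite record of observed input–output pairs is compatible with infinitely many different programs, so the record alone cannot identify the function.

    # The observed pairs look like the squaring function ...
    observed = [(0, 0), (1, 1), (2, 4), (3, 9)]

    def candidate_a(n):
        return n ** 2

    def candidate_b(n):
        # agrees with candidate_a on every observed argument, diverges afterwards
        return n ** 2 if n <= 3 else 0

    assert all(candidate_a(n) == v for n, v in observed)
    assert all(candidate_b(n) == v for n, v in observed)
    # Both candidates (and infinitely many others) fit the finite data; only a
    # representation of the function or program "as a whole" decides between them.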
Notice that to know that we deal with a machine and to know its program is
almost one and the same thing. For if I know the program, I know that it is a
machine. If I do not know the program, I do not know if there is, by chance, a human
being behind the screen that answers my question just like the dwarf in the famous
chess machine of the eighteenth century. In order to know for sure that I deal with
an automaton, I obviously have to know not only something about the observed
behavior of my opponent. I have to know something totally different. In the end I
have to know, again, the function or program.
Hence, the whole setting of the usual Turing test is already flawed. It is flawed
because the tested person does not have a chance to succeed at all, on a priori
grounds, if the program of the tested ‘machine’ is ‘clever enough’. Or rather, there
are no appropriate nonsubjective winning conditions defined. The questions I am
allowed to ask the supposed ‘machine’ are not described precisely enough, not to
mention how I am to evaluate the answers. May I ask, for example, how my oppo-
nent chooses to calculate the results, say, whether the input is converted into a
coding number, and how? And if I may ask such a question, it is not clear whether the
machine is allowed to lie.22 It is obviously difficult to find out if the machine lies in
such cases. In other words, the illusions created by the Turing test are just conse-
quences of the fact that I am, by definition, barred from some knowledge other
people really have. Hence, the case is fairly similar to the following famous exam-
ple: When I, standing in front of a barn-façade, am not allowed to enter or walk
around it, I cannot ‘know’ (for sure) that it is a barn.23
If we consider this, it becomes clear why it is unclear to say, with Brandom, that
“something passes” the Turing test as a talking test “if one cannot tell it from a
human speaker.”24 The basic problem of the Turing test is that the person tested is
not allowed to use structural knowledge about the form of operation of a computer
and a recursive function. If I were allowed to use such knowledge, I would be able
to represent the inner operations of my opponent by something like a two-place
recursive function f(x,y) of the natural numbers into the natural numbers, where y
is a kind of parameter defining a set of one-place recursive functions fy(x). If I can
do this, I already know the limits of the operations the opponent is capable of per-
forming. I.e., I know that the opponent is an automaton.25
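The parametrization mentioned here can be pictured very simply; the following sketch (my gloss, in Python, with an arbitrary toy ‘program table’) shows how a two-place function f(x,y) packages a whole family of one-place functions fy(x):

    # y plays the role of a program index; fixing it yields the one-place function f_y.
    def f(x, y):
        rules = [lambda n: n + 1, lambda n: 2 * n, lambda n: n * n]   # toy program table
        return rules[y % len(rules)](x)

    def f_y(y):
        return lambda x: f(x, y)

    double = f_y(1)
    assert double(21) == 42

Knowing such a representation of the opponent ‘as a whole’ is just the kind of structural knowledge that the setting of the Turing test withholds.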
Now let us assume that there is some ‘thing’ that ‘behaves’ in a certain way and
I want to know if it is a (hidden) (Turing) machine. The right thing to do is this: I
go and ask the people who have constructed the machine and know about the pro-
gram. But precisely this is not allowed by the whole setting of the Turing test.26
It is rather amazing that no one seems to see that this procedure is a way of
begging the question. For if there is a person or only a random process behind the
screen of the Turing test, the situation can be presented in the following way:
Assume that there are two observers, neither of whom knows whether a person has pro-
duced the behavioral results they observe. And assume that the one says it was a
person, the other says no. Then the two behave just like a believer in God and a
nonbeliever.
Of course, we can implement random devices and we can build perception
machines reacting to their surroundings in a way that it is actually impossible to pre-
dict what they will do. But this does not turn the machine into a non-machine. It
is not prediction as such that makes a crucial distinction here, but the possibility of
reacting freely or spontaneously to knowledge about the program of such a machine.
Nobody can be barred from treating a machine like a person, just as nobody
can be prevented from treating animals, rivers, or trees as gods. The point is that our
distinction between what is and what is not reasonable to do rests, in the end, on
all real knowledge available to us. The same holds for conceptual inferences. They
always have a certain material background. Their real ‘place’ in our joint practice
and experience distinguishes them from any merely ‘formalist’ use of words and
sentences. Such formalist rules can make the content of the words empty, ‘utopian’,
i.e. place-less, without a topos or topic. In fact, the real problem of empiricism is its
abstract nominalism. It presupposes a formalist split between the ‘empirical’ and
the ‘conceptual’, where the former is interpreted as mere sensory input, the latter as
a merely formal verbal processing. Moreover, real knowledge and conceptual infer-
ence surpasses by far ‘my’ parochial knowledge, which, as such, depends on too
many contingent facts of my subjective history and perspective(s).
It is to be admitted, though, that the distinction between singular knowledge
claims in an I-mode, limited by the subjective perspective of the individual speaker,
on the one hand, and joint and general knowledge in a we-mode, on the other
hand, is difficult to grasp. The problem lies in the difficult differentiation between
‘us’ as a singular group of individual people and ‘us’ in a generic sense. This diffi-
culty makes it appear as if our knowledge about how to distinguish between men
and machines could be of the same form as my knowledge. But this is not so. In our
example, at least someone must know that there is a machine and not a person
behind the screen. And if he knows this, it is because he knows something about the
construction of the machine, its program. This means that he knows something
about the recursive function defined for the coding numbers represented by the
machine: If the machine does not calculate this function, the machine is defective:
Our knowledge about the function represented by the machine counts as a norm
for evaluating the machine as still functioning, as not having a defect, as giving the
correct results in a calculation.
Brandom says about the Turing test that there “is just no point in insisting that
something that is genuinely indistinguishable from other discursive practitioners
in conversation—no matter how extended and wide-ranging in topic—should
nonetheless not be counted as really talking, so thinking (out loud), and deploying
a meaningful vocabulary.” But notice, first, that the misuse of the word “indistinguish-
able” in the whole setting is such that it excludes the relevant distinctions a pri-
ori. Second, the very phrase “no matter how extended and wide-ranging in topic”
does not apply to the set-up of the Turing test at all.
Since all (at least most) claims of “AI-functionalism” or “automaton function-
alism about sapience” rest on sweepingly intuitive answers to the Turing test, it is
just an empty promise to say that ‘we’ can give an AI-account of “what it is in virtue
of which intentional-state vocabulary such as ‘believes that’ is applicable to some-
thing,” and “the capacity to engage in any autonomous discursive practice, to deploy
any autonomous vocabulary, to engage in any discursive practice.” But even though
I do not acknowledge “the criterial character of the Turing test” at all, I certainly
agree with Brandom that it would be still “a long way to endorsing the computa-
tional theory of the mind” provided by ‘classy’ (or ‘utopian’) artificial intelligence
(UAI). And I agree that “the most important issue is the claimed algorithmic char-
acter (or even characterizability) of thought or discursive practice. The issue is
whether whatever capacities constitute sapience, whatever practices-or-abilities it
involves, admit of such a substantive practical algorithmic decomposition.”
But even classical arithmetic and mathematics only seemingly admit of a ‘sub-
stantive’ algorithmic decomposition. With Frege, we understand the importance of
syntax for semantics. But this does not mean at all that we can reduce (with Rudolf
Carnap, Tarski, Quine, or Haugeland) the semantics of arithmetic or set theory to
the syntax of formal deductions from Peano-axioms and from the axioms of
Zermelo and Fraenkel. Axiomatic-deductive systems are only technical means to
facilitate our linguistic representations of some proofs. They do not define the
inferential content of sentences. They only help to make proofs more general. In
other words, formal axiomatic systems definitely do not make the concept of proof
and truth, semantic content and content-dependent inference explicit.27 But this is
precisely what Brandom purports them to do.
Brandom believes that the “principal significance of automata” lies “in their
implementing a distinctive kind of PP-sufficiency relation.” The idea is to reconstruct
discursive abilities as complex abilities. I.e., they are supposed to appear as a kind
of ‘combination’ of more primitive abilities, which are, with respect to an algorith-
mic elaboration of these ‘combinations’, not discursive themselves.
In a similar vein, the early Wittgenstein tried to explain sentences with empir-
ical content as logically complex sentences. In his picture, the ‘primitive’ sentences
do not express content. We must learn to use them by training. For this, he assumes
some deep-structural coordination of observation (Anschauung) with (holistic)
‘names’. Or rather, he thinks that elementary sentences are somehow glued to ele-
mentary states of affairs or Sachverhalte. This gluing defines, in Wittgenstein’s
model, the projection method of empirically contentful, true or false, sentences, by
which we can refer to (possibly non-present) states of affairs (komplexe Sachlagen).
Brandom’s version of “AI-functionalism” now claims, under the heading “algo-
rithmic pragmatic elaboration” (APE), “that there is a set of (primitive) practices-
or-abilities” which “can be algorithmically elaborated into (the ability to engage in)
an autonomous discursive practice,” such that every primitive practice or ability can
“be understood to be engaged in, possessed, exercised, or exhibited by something
that does not engage in any ADP (autonomous discursive practice).”
I agree that we should not overestimate the role of symbolic acts for thought
and sapience and put actions more to the front. But we should not underestimate
symbolic acts and practices either. Be that as it may, Brandom wants to focus on the
“PP-sufficiency of the algorithmic elaboration” of complex faculties, including the
faculties of using symbols in communication, and in the planning of one’s own
actions. In a sense it may be true that “for any practice-or-ability P, we can ask
whether that practice-or-ability can be (substantively) algorithmically decomposed
(pragmatically analyzed) into a set of primitive practices-or-abilities.” And we are
often interested in a methodical order of learning and training, such as in the case
of multiplication or subtraction of decimal numbers or in the decomposition of
division into a systematic search procedure using multiplications and subtraction.
For Brandom, it is an empirical question of whether we really can decompose,
say, piano playing, riding a bicycle, or playing the violin in such a ‘substantial’ way.28
“So the question of whether some practice-or-ability admits of a substantive prac-
tical algorithmic decomposition is a matter of what contingent, parochial, matter-
of-factual PP-sufficiencies and necessities actually are exhibited by the creatures
producing the performances in question.” Let us notice, however, that whether we want
to say that it is an empirical and contingent fact that, say, my brother was able to learn
German whereas my brother-in-law was able to learn French depends heavily on our
notions of ‘empirical’ and ‘matter of fact’, ‘contingent’ and ‘conceptual’. The point is
that generic statements about what a species of crea-
tures can do are in a fairly different sense empirical and contingent than statements
about singular objects or facts—like the one that, say, a certain individual is men-
tally retarded by birth and cannot learn a language at all. The latter are in a much
stronger sense empirical and contingent than the former.
I agree, of course, that the question of ‘learnability’ “is very general and
abstract, but also both empirical and important. It is a very general structural ques-
tion” about certain abilities of members of a species . . . and that this “issue, as such
. . . has nothing . . . to do with symbol-manipulation.” Brandom suggests “that we
think of the core issue of AI-functionalism as being of this form. The issue is
whether whatever capacities constitute sapience, whatever practices-or-abilities it
involves, admit of such a substantive practical algorithmic decomposition.”
Any differential analysis of the faculties of men, primates, and of cunningly
invented robots will have to take certain matters of fact into account. But its results
will nevertheless be conceptual accounts, as I would like to call any account which
is able to explicate generic similarities and distinctions. In other words, we should
identify the conceptual with the generic (Hegel’s category of Allgemeinheit), the
empirical with the singular (Hegel’s category of Einzelheit). Then we can see in
which sense empirical or singular facts are contingent, and in which sense generic
facts are not just contingent, but define the modality of conceptual necessity.
Even though Brandom does some important terminological work here, it is
not enough. He distinguishes “classical symbolic AI-functionalism” as merely one
species of “algorithmic practical elaboration of AI-functionalism.” However, we
would mislocate the central issues if we only focused on the symbolic nature of
thought. But we can show even for mathematical thinking that there is no suffi-
cient “practical algorithmic analyzability” of mathematical proofs. If we widen the
scope to “whatever practices-or-abilities,” as Brandom suggests, we should distin-
guish extremely carefully between the question whether a certain ability is a nec-
essary condition for full-fledged sapience and the question whether it is already
sufficient.

V. ON SECTION 4: ARGUMENTS AGAINST AI-FUNCTIONALISM: RANGES OF COUNTERFACTUAL ROBUSTNESS FOR COMPLEX RELATIONAL PREDICATES

Mathematical (arithmetical) truths can be seen as results of our reflections on the
very possibilities and limits of using schematic rules altogether. As such, the notion
of mathematical content by far transcends merely syntactic isomorphy, which
therefore is not sufficient for mathematical contentfulness.
My arguments for this claim are completely independent of any preference
for the first-person point of view or any assumed limitation of a “third-person
point of view,” as Searle has epitomized it in his Chinese room thought experiment.
Searle talks about an automatic translator that does not comprehend anything
itself. I agree with Brandom that the appeal to a “first-person understanding” does
not lead us anywhere. The real problem has to do with the “particular variety of
pragmatism” in question, viz. of ‘classy’, i.e. Utopian, Artificial Intelligence. The
problem is the corresponding concept of (complex) action. In a sense, I agree with
Hubert Dreyfus’s contention that UAI sweepingly claims that it is possible to
explain any such action computationally.
Brandom, on the other side, hails “the relatively precise shape” that artificial
intelligence “gives to the pragmatist program of explaining knowing-that in terms
of knowing-how: specifying in a nonintentional, nonsemantic vocabulary what it
is one must do in order to count as deploying some vocabulary, hence as making
intentional and semantic vocabulary applicable to the performances one produces
(a kind of expressive pragmatic bootstrapping).” But no content is produced by
exact rules or schemes of verbal and practical inference of the sort an automaton can
perform. Content rather is a form of joint action, of communication and cooperation.
The forms and norms that make this jointness possible are not at all ‘exact’.
For the jointness must be a result of free actions on the side of the persons taking
part in the cooperation.
In other words, comprehending content presupposes the faculty of free action
and free judgment. No automaton, no machine, can have such a faculty, or else it
stops being a machine. Even proving in mathematics, the ‘mother of all exact sci-
ences’, is at its core not at all merely exact rule-following and hence cannot be suf-
ficiently elaborated in this way, pace Hilbert, Carnap, or Brandom.
It is already wrong to assume a mere “stimulus-response vocabulary” as “VP-
sufficient to specify all the primitive abilities that can serve as inputs to the process
of algorithmic elaboration.” The reason is that if we widen the scope of basic abili-
ties or primitive capacities in a way that goes far beyond what a machine can do, we
use the word “machine” only metaphorically. Then, of course, nobody can deny
that there is some “algorithmic decomposability of sapience.” The question is how
we should understand such a metaphor, and how to make its true content explicit
in a more literal way. Frege’s philosophy of language and Wittgenstein’s
Tractatus already contain metaphors like this. They use analogies with formalized languages
of mathematics when they ‘explain’ or ‘explicate’ semantic competence by logical
decomposition. Brandom says, accordingly, that the decomposition should be
substantive. I.e., the basic thing we start with “must be something that could be
exhibited by a creature that does not deploy an autonomous vocabulary.” And he
continues: “That is a substantial restriction—one that very well may, as a matter of
empirical fact, rule out the abilities I just offered as examples of sophisticated dis-
criminative and performative abilities that would otherwise qualify as ‘primitive’
for the purposes of practical algorithmic elaboration.”
Brandom himself wants to “find some aspect exhibited by all autonomous dis-
cursive practices that is arguably not algorithmically decomposable into nondiscur-
sive practices-or-abilities.”29 And he finds it in “the practice of doxastic updating—
of adjusting one’s other beliefs in response to a change of belief.” This is the most
interesting core of his argument. For now he himself comes back to the point that
our particular way of dealing with (relevant) possibilities makes and marks the dif-
ference not only between us humans and animals, but also between men and
machines. This brings Brandom back to considerations we can find in Husserl and
Heidegger (if we look for them in an appropriate way).
The practice of doxastic updating “is a PV-necessary aspect of the deployment
of any vocabulary,” for it is necessary for making claims, and “of being a reason for
or against” such claims. The crucial question is “what follows from it and what it
follows from, what other commitments and entitlements the various commitments
it can be used to undertake include and preclude. And that is to say that one under-
stands what a bit of vocabulary means only insofar as one knows what difference
undertaking a commitment by its use would make to what else the one using it is
committed or entitled to—that is, insofar as one knows how to update a set of com-
mitments and entitlements in the light of adding one that would be expressed using
that vocabulary.”
“The updating process is highly sensitive to collateral commitments or beliefs.”
Brandom promises to show in the next (fourth) lecture how a “global updating
ability” results from “a collection of sub-abilities: as the capacity, in one’s actual
doxastic context, to associate with each commitment a range of counterfactual
robustness.”
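To make vivid what an algorithmic rendering of such updating would even look like, here is a deliberately crude toy model in Python (mine, not Brandom’s, with an arbitrary fixed incompatibility table):

    # "Updating a set of commitments in the light of adding one", driven by a
    # fixed incompatibility relation -- everything subtle is left out on purpose.
    incompatible = {
        ("the sample is metal", "the sample is an insulator"),
        ("it is raining", "the streets are dry"),
    }

    def clashes(p, q):
        return (p, q) in incompatible or (q, p) in incompatible

    def update(commitments, new_claim):
        # retract whatever is incompatible with the newly undertaken commitment
        kept = {c for c in commitments if not clashes(c, new_claim)}
        kept.add(new_claim)
        return kept

    beliefs = {"it is raining", "the sample is metal"}
    beliefs = update(beliefs, "the streets are dry")
    assert "it is raining" not in beliefs

The very features stressed in what follows—relevance, collateral commitments, ranges of counterfactual robustness—are exactly what such a fixed, finite table cannot capture; the sketch is meant only to show how little of the actual updating practice an algorithmic rendering of this kind contains.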
Even though the dependence on context “can be algorithmically elaborated
from . . . abilities to discriminate ranges of counterfactual robustness,” Brandom
believes that such an algorithmic decomposition is not substantive, for it presup-
poses that we already are discursive creatures. In fact, explicit modal knowledge pre-
supposes this. And: “Being able to entertain semantic contents with that character
is being sapient.”
I could not agree more. But I would like to add that this semantic updating is
a fairly complex joint enterprise. In fact, science works on ‘the concept’ and is ‘con-
ceptual work’ in the following sense: We propose, and test, default, or normal, infer-
ences, and make them explicit in the form of verbal rules. These rules represent
empractical norms of (correct) inferences. These empractical norms are materially
(‘empirically’) grounded in joint generic experience (a word avoided by Brandom).
But they already count as conceptual, just because they are generic. We also always
develop these empractical norms, forms, or practices, because we always need
improvements of our particular projections of such generic truths and inferences
on situations, contexts, and cases. These particular empractical forms define Hegel’s
special category of Besonderheit. Making these forms of particular projections of
generic verbal rules onto the world of individual and joint experience explicit pre-
supposes a correct empractical use of these explications. In the end, there is no rela-
tion of linguistic forms to the world without the empractical category of competent
judgment.
As a result, it is grossly exaggerated to say that “any new information about any
object carries with it new information of a sort about every other object.” Our dif-
ferential and relational predicates are usually limited to certain domains in an
empractical way. In other words, there is no comprehensive universe of discourse
as a joint domain for variables, names, and objects. Second, an automaton cannot
79
cannot, because we judge about the default-norms of correct (empractical) appli-
cations of conceptual rules of verbal inferences in view of our interest in a good
development of cooperative experience and its generic articulation.
Nevertheless, doxastic updating is a most interesting aspect of our joint con-
ceptual (generic) knowledge. We must ask how we jointly develop and control this
updating. And I agree that the most important feature here is relevance and that “no
non-linguistic creature can be concerned with totally irrelevant properties ‘like frid-
geons or old-Provo eye colors.’” But an automaton might be programmed in such a way
that it does not care about weird properties. Updating or developing (conceptual)
knowledge requires judgments about relevance. And this requires “the capacity to
ignore some factors one is capable of attending to.” But which ones? Any answer to
this question has to leave the level of ‘rational understanding’ (Verstand) in the
sense of following learned norms and rules behind. We rather have to turn to the
practice of good judgment and ‘experienced reason’, as I would like to translate the
German title(-word) “Vernunft.” My contention is that we develop in a joint insti-
tution of generic knowledge (or general science in the old comprehensive sense of
Latin scientia) systems of default inferences not only in a theoretical, verbal, and
explicit way, but also by developing empractical forms and norms of correct appli-
cation. By this we define ranges of counterfactual robustness of individual claims and
inferences and the ability to deal with them. Brandom’s own position seems to be
much weaker, in the end. For he only says: “. . . it is not plausible . . . that this ability
can be algorithmically decomposed into abilities exhibitable by nonlinguistic crea-
tures.” This is certainly true. But if we view robots as linguistic creatures, the inter-
esting question is why they do not have the required ability, at least not in the
comprehensive form needed.
The challenge of classy or utopian AI has been that human intelligence is taken
to be a kind of cross between animal sentience and artificial intelligence. But
artificial intelligence as such is not ‘nonlinguistic’. Therefore, Brandom’s claim that
“nonlinguistic creatures have no semantic, cognitive, or practical access at all to
most of the complex relational properties” does not help us much. A good intelli-
gent system would have to “assess the goodness of many material inferences” and
work with some versions of counterfactual robustness. I agree with Brandom’s con-
tention that this faculty could not “be algorithmically elaborated from the sort of
abilities” those creatures (namely robots) do have. But I do not see the argument in
Brandom’s account. Does he want to say that robots do not grow on trees? Surely,
judgments of relevance cannot belong to a primitive, nondiscursive, level of facul-
ties. For they presuppose complex linguistic abilities. But what does this show beyond
the fact that we need a hierarchy of levels of reflection on symbols or language and
language use? The task would be to show that there is no intelligent formal system
that can do this trick, namely to talk about language and language use. I do not see
that Brandom has shown us anything about this problem.30
“What is at issue here is the capacity to assess the goodness of nonmonotonic
material inferences, by practically associating with each a range of counterfactual
robustness.” But once again it is only a claim to say: “This function is not com-
putable by substitution inferences, or even from the incompatibilities of sets of
nonlogical sentences. And that lack cannot be repaired by introducing a nonmo-
notonic logic, because what is at issue is the propriety of nonmonotonic material
inferences.”
For Brandom, the “issue is an empirical one, and these arguments are accord-
ingly probative rather than dispositive. They offer reasons supporting pessimism
about the algorithmic practical elaboration of the AI thesis, but not reasons oblig-
ing us to offer a negative verdict.” I think we can do better if we improve our
understanding of (a) the Rhodos principle, (b) the relation between technical AI
and UAI, (c) the distinction between contingent (or empirical) arguments and con-
ceptual (or generic) knowledge, (d) the fact that we always need good relevant
judgment in applying concepts to particular cases, and (e) what it means to reflect
on the scope and limits of rule-governed intelligent systems, especially with respect
to their relation to the real world and to our way of using languages.

VI. ON SECTIONS 5 AND 6: PRACTICAL ELABORATION BY TRAINING, PEDAGOGICAL DECOMPOSITION, AND PEDAGOGICAL POLITICS

In the last section, Brandom contrasts (a) verbal teaching with practical training,
or, respectively, (b) learning by talking with learning by doing or by being trained to do
something at will.31 Moreover, Brandom distinguishes algorithmic decomposition from
pedagogical decompositions of teaching and training into a series of steps that, in
normal cases, empirically lead to the desired results.
Considerations like these are interesting in their own right. For they ask whether whatever one person can learn or be trained to do is such that any other person can learn it as well. Guided by this question, we should organize our schools (Dewey). Here it certainly is 'only an empirical' fact that for some persons learning something takes longer than for others, perhaps too long for anyone to keep their temper. For example, there seems to be no fool-proof method of teaching the subtraction n-m of arbitrary (decimal) numbers n and m. Even though we have a computational decomposition of the task, the problem obviously is that we are too slow and lazy to count from m to n. Decimal subtraction, on the other hand, is a technique which presupposes some short-term memory and the discipline of keeping track of the 'position-numbers'. Therefore, it seems to be hard for some pupils to learn this method in adequate time.
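To make the contrast concrete, here is a minimal sketch, in Python, of two algorithmic decompositions of the same subtraction task; it is my own illustration, not Stekeler-Weithofer's or Brandom's, and the function names are merely mnemonic. Both procedures compute n-m; what differs is the repertoire of elementary abilities each presupposes in the learner.

    # Decomposition 1: count upward from m to n (assumes n >= m >= 0).
    # Elementary abilities presupposed: adding 1 and testing for equality.
    def subtract_by_counting(n, m):
        steps = 0
        while m + steps != n:
            steps += 1
        return steps

    # Decomposition 2: the decimal technique, digit by digit with borrowing.
    # Elementary abilities presupposed: single-digit subtraction, holding a
    # 'borrow' in short-term memory, and keeping track of the digit positions.
    def subtract_decimally(n, m):
        n_digits = [int(d) for d in str(n)][::-1]   # least significant digit first
        m_digits = [int(d) for d in str(m)][::-1]
        result, borrow = [], 0
        for i, d in enumerate(n_digits):
            diff = d - borrow - (m_digits[i] if i < len(m_digits) else 0)
            borrow = 1 if diff < 0 else 0
            result.append(diff + 10 * borrow)
        return int("".join(str(d) for d in reversed(result)))

    assert subtract_by_counting(52, 17) == subtract_decimally(52, 17) == 35

The first decomposition is trivially learnable but far too slow for large numbers; the second is fast, but each of its steps presupposes abilities, such as holding the borrow in mind while tracking positions, that some pupils acquire only slowly.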
Brandom thinks that pedagogical decomposition, or for that matter, the
methodical order of learnabilities is “one of the master ideas animating the thought
of the later Wittgenstein.”32 I do not agree. But the real problem with this idea is
that it cannot be a matter of contingent fact what we can learn. When Wittgenstein
“emphasizes the extent to which our discursive practices are made possible by facts”
like the one that we can acquire a set of abilities in a series of steps, he talks about
generic forms, not about empirical facts in the narrow sense of frequent occurrences.
Therefore, he also does not talk about ‘all’ human beings.
On the other hand, the pedagogical problem of decomposition of subtraction
may be empirical and contingent in the following sense: there may just be too many
children, or too many teachers, who do not have the patience, and discipline,
needed for learning, or teaching, the trick. But this would lead us into another
debate, namely how to distinguish between what can be learned by training and/or
verbal teaching ‘in principle’ by anyone and what should count as a contingent fact
about ‘actual’ learnability, i.e. that many of us do not learn this and that, for what-
ever reasons, most of which have to do with speed, discipline, and patience.
Things get even more complicated, of course, when the task is to learn to freely
deal with analogies and metaphors, irony, and other figures of speech and with all
kinds of regular and open 'abuses' or new uses of language. I will, however, leave
things at that, for these questions lead us into fairly new fields, far away from the
claims, and prospects, of artificial intelligence.

NOTES

1. Comments on Robert Brandom’s John Locke Lectures “Between Saying and Doing,” Lecture No.
3: “Artificial Intelligence and Analytic Pragmatism.”
2. We find this image of ourselves as rational animals nicely expounded in Donald Davidson's paper
“Rational Animals,” reprinted in Davidson 1984.
3. Cf. M. Heidegger, Platons Lehre von der Wahrheit. Mit einem Brief über den Humanismus. (On
Humanism. Letter to Jean Beaufret), Bern 1947.
4. Brandom himself says: “This computational theory of the mind ( . . . ) is (indeed) . . . a view that
long antedates the advent of computers, having been epitomized already by Hobbes in his claim
that ‘reasoning is but reckoning.’”
5. Heidegger asks a similar question in his reactions to Edmund Husserl's philosophical method of 'epoché'. This method tries to make things more explicit by bracketing, in a first step, all 'usual'
knowledge and familiar ways of talking about ourselves, and to start afresh with a kind of rigorous
philosophy as a phenomenologically and logically controlled account of what we can do already.
6. Cf. Karl Bühler 1934 and Lorenzen 1986, p. 20, with Gilbert Ryle 1949 and his analysis of ‘know-
ing-how’.
7. I sometimes use the terms ‘title’ and ‘title-word’ for words that refer to aspects of our life and world
in a highly general way, like “action,” “practice,” or “language.” In German philosophical terminol-
ogy, name-like labels of this sort are also called “Reflexionstermini.” They appeal to our (joint)
experience, which determines the generic content of these labels, in an empractical way (cf. Bühler,
ibid., 28ff). Such terms refer to whole domains of knowing how to do things in a rather vague and
comprehensive way.
8. “—that is, of a vocabulary that is VP-sufficient to specify the practices-or-abilities PV-sufficient to
deploy the first vocabulary.”
9. An arithmetical function is defined via an equivalence relation between many different operations
or other representations ‘of the function’, as we say in retrospect. The domain in which the equiv-
alence relation is defined is the domain of all possible arithmetical descriptions of functions. Two
such descriptions are equivalent if and only if they lead to the same 'course of values', i.e. the same
‘extension’, which we can identify with a ‘set’ of pairs consisting of arguments and values ‘of the
function’.
10. In a sense, this can be seen as a deep side-result of Kurt Gödel’s second theorem, if understood in
a proper way. To show this in detail would need more space and time.
11. Cf. my “Formen der Anschauung. Eine Philosophie der Mathematik” (Berlin: de Gruyter, 2008).
12. I use the word “spontaneity” here in a ‘Kantian’ sense: There is another, quite contrary, sense in
which I do something spontaneously if I do it ‘just like this’, ‘by chance’, ‘unwillingly’, if I behave
‘without control’, so to speak.
13. Any academic discipline or science should refrain from fighting dead enemies. For this, we always
need historically informed judgments of relevance. They tell us if a question has to be treated as
still open and living, or if it has to count as settled. A different task is education, as it may be espe-
cially needed in the proverbial southern states of countries like the United States, Europe, or
Germany, for that matter.
14. Brandom’s analogy is not too helpful: “the relations between ‘belief ’, ‘desire’, ‘intention’, and ‘action’
might be modeled on the relations between ‘valve’, ‘fluid’, ‘pump’, and ‘filter’.”
15. Symbols of written language only seem to be just physical things; they usually get their “meaning”
or “content’ by representing possible oral interactions or by adding spoken comments, as Plato
famously saw.
16. No matter who invented the titles for Aristotle’s books on which grounds, metaphysics in its cor-
rect understanding is and always was (meta-level) ontology, i.e. logico-semantical reflection on
what it means to claim that something is the case.
17. To grasp these categorical distinctions requires a kind of sensitive taste for relevant and irrelevant
aspects. In the latter case, judgments of equivalence define generic identities. Not to get confused
by vague analogies, metaphors, or vague ‘thought experiments’ depends on such scientific scrutiny
and logico-linguistic ‘accuracy’ in the sense of Bernard Williams or jointly controlled and dialec-
tically developed ‘rigor’ in the sense of Friedrich Kambartel. The ‘sincerity’ of merely subjective
beliefs and intuitive feelings of satisfaction are not enough.
18. A state of desire, which we share with animals (in German: mere Begierde) has to be understood
here in contradistinction to an already conceptually informed wish.
19. Neither the fact that computers as such do not have devices for perception and do not move (like
robots do), nor Searle’s appeal to sentience and his concept of ‘intentionality’ (as he calls the struc-
ture of animal desire) can be reasonably used as an argument against the claims of AI.
20. I agree with Brandom “that functional properties are not physical properties.” We are usually inter-
ested in particular representations of these ‘software-properties’ by some (efficiently working)
‘hardware’.
21. Not everybody is able to figure this purpose out, just as not everybody can know what a special
machine is good for. Therefore, not everybody can evaluate whether something is a correctly running machine or a defective one, or a random generator, or whether there is a human being hidden behind a
screen. I do not deny that a computer can manipulate symbols in ways that accord with the ‘mean-
ings’ we have given to them. But I totally disagree with John Haugeland’s slogan “that if the
automaton takes care of the syntax, the semantics will take care of itself." We have to take care of
the syntax and the corresponding semantics, which usually transcends the schemes of merely for-
mal computations by far.
22. In order to sharpen the winning conditions against a computer, we certainly should say: If I know
that the answer given by the machine is false (because of its mode of operation), I certainly
should be the final winner.
23. Quite a few debates in recent philosophy result from the trick of allowing only partial knowledge to a certain party and comparing it with 'our' knowledge. If I see the barn and use the barn as a barn,
it does not make any sense to doubt any longer that it is a ‘real barn’. Hence, Gettier’s (or Alvin
Goldman’s) problems just remind us of something we never should have forgotten, namely that
individual knowledge claims are always limited (finite) by the limited (finite) perspective of the
speaker.
24. What does it mean that ‘one cannot’ tell a robot from a human speaker? Only sufficiently intelli-
gent methods of approaching the Turing test should be taken into account. As Joseph Weizenbaum
shows in Computer Power and Human Reason: From Judgment to Calculation (W. H. Freeman &
Co. 1976) about the computer program Eliza, there often is some lack of intelligence on the side
of the persons who deal with so-called intelligent systems.
25. The diagonal argument is an easy way to reflect on the limitations of such a set of functions defined by parameters. For if the function g(x) is defined to be f(x,x)+1, we know that for no y does the function f_y represent g. This shows that we can define g, but the machine cannot compute it. Hence, we can ask the machine to calculate g, and we can be sure that it takes only finite time until we come to a place where the answer of the machine must be wrong. (A worked version of this argument is sketched after these notes.)
26. If I know the function defining the machine and how the parameters are set, I can, in principle,
predict the answers of the machine to my ‘questions’ (arguments).
27. Insofar as deductions from axioms according to formal rules of inference turn into real proofs
only via interpretations of the quantifiers in (usually half-formal) semantic models, we must dis-
tinguish formalist algebra and real algebra, formalist arithmetic and real arithmetic, formalist
geometry and real geometry, formalist set theory and real set theory. Only then do we see why a cer-
tain formalist branch of inferentialism is misleading already as an understanding of mathematics
and proofs, not to speak of language and arguments.
28. One way of investigating the sort of PP-necessity relations involved in the requirement that a
practical algorithmic decomposition be substantive is to look at organisms that have been dam-
aged in various ways, for instance, by localized strokes or bullets in their brains. Thus, we might
find out that a whole suite of abilities comes together—that, as a matter of empirical fact, anyone
who has lost the ability to do A will also have lost the ability to do B. What we can tell on a priori
grounds is only that there must be some things that we can just do, without having to do them by
doing something else, by algorithmically elaborating other abilities. This is compatible with there
being clusters of abilities such that we can do some of the component things by doing some—pos-
sibly conditioned—sequence of others.
29. “That would be something that is PV-necessary for deploying any autonomous vocabulary (or
equivalently, PP-necessary for any ADP) that cannot be algorithmically decomposed into prac-
tices for which no ADP is PP-necessary.”
30. Brandom says: “We have no idea at all how even primitive non-discursive abilities that could be
substantively algorithmically elaborated into the capacity to form the complex predicates in ques-
tion could be further elaborated so as to permit the sorting of them into those that do and those
that do not belong in the range of counterfactual robustness of a particular inference.”
31. Training is “a course of experience, in Hegel’s and Dewey’s sense (Erfahrung rather than Erlebnis)
of a feedback loop of perception, responsive performance, and perception of the results of the per-
formance." I would like to add that responsive performance must be performed at will. That is, we have not only to learn to respond properly (according to some norms defining normality), but also to learn not to respond properly, for example when we want to show that it is up to us how we decide
to respond. This is the reason why irony and all other free tropes of discourse are so important:
They show that any speech act has the characteristic of a free action.
32. “Wittgenstein sees this sort of non-algorithmic practical elaboration as ubiquitous and pervasive,”
says Brandom.
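A brief worked gloss on the diagonal argument of note 25 (my own reconstruction, added for illustration; it is not part of Stekeler-Weithofer's text): suppose the machine's repertoire is a parameterized family of one-place functions f_y, where f_y(x) = f(x, y). Define the diagonal function

    g(x) = f(x, x) + 1.

If some parameter y were such that f_y represented g, then evaluating both at the argument x = y would give

    g(y) = f(y, y) + 1    but    f_y(y) = f(y, y),

which differ by 1. Hence g agrees with no member of the family, and whoever can survey the whole family f can define a function lying outside the machine's parameterized repertoire; asking the machine for g at the critical argument exposes the discrepancy after finitely many steps.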

REFERENCES

Bar-Hillel, Y. 1964. Language and Information: Selected Essays on Their Theory and Application. Reading, Mass.: Addison-Wesley.
Brandom, R. B. 1994. Making It Explicit: Reasoning, Representing and Discursive Commitment.
Cambridge, Mass.: Harvard University Press.
Brandom, R. B. 2000. Articulating Reasons: An Introduction to Inferentialism. Cambridge, Mass.:
Harvard University Press.
Bühler, Karl. 1982 [1934]. Sprachtheorie: Die Darstellungsfunktion der Sprache. Jena: Verlag Gustav
Fischer.
Cresswell, M. J. 1973. Logics and Languages. London: Methuen.
Carnap, R. 1942. Introduction to Semantics. Studies in Semantics I. Cambridge, Mass.: Harvard
University Press.
Chomsky, N. 1957. Syntactic Structures. The Hague: Mouton.
Chomsky, N. 1980. Rules and Representations. New York: Columbia University Press.
Davidson, D. 1970. “Semantics for Natural Languages.” In Linguaggi nella Società e nella Tecnica, ed.
B. Visentini et al. Milano: Edizioni di Communita.
Davidson, D. 1980. Essays on Actions and Events. Oxford: Oxford University Press.
Davidson, D. 1984. Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
Davidson, D., and G. Harman, eds. 1972. Semantics of Natural Language. Dordrecht: Reidel.
Dennett, D. 1988. The Intentional Stance. Cambridge, Mass.: MIT Press.
Dummett, M. A. E. 1975. "What Is a Theory of Meaning?" In Mind and Language, ed. Guttenplan,
97–138. Oxford: Clarendon Press.
Dummett, M. A. E. 1976. "What Is a Theory of Meaning? (II)." In Truth and Meaning: Essays in
Semantics, ed. Evans and McDowell. Oxford: Clarendon Press.
Etchemendy, J. 1990. The Concept of Logical Consequence. Cambridge, Mass.: Harvard University
Press.
Evans, G. 1982. The Varieties of Reference. Oxford: Clarendon Press.
Evans, G., and J. McDowell, eds. 1976. Truth and Meaning: Essays in Semantics. Oxford: Clarendon
Press.
Feigl, H., and W. Sellars, eds. 1949. Readings in Philosophical Analysis. New York: Appleton-Century-
Croft.
French, P., T. Uehling, and H. Wettstein, eds. 1979. Contemporary Perspectives in the Philosophy of
Language. Minneapolis: University of Minnesota Press.
Gödel, K. 1930. "Die Vollständigkeit der Axiome des logischen Funktionenkalküls." Monatshefte für Mathematik und Physik 38: 213–42.
Gödel, K. 1931. "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I." Monatshefte für Mathematik und Physik 38: 173–98.
Goldman, A. 1976. "Discrimination and Perceptual Knowledge." Journal of Philosophy 73, 20: 771–91.
Grover, D., J. Camp, and N. Belnap. 1975. “A Prosentential Theory of Truth.” Philosophical Studies 27:
73–125.
Guttenplan, S., ed. 1975. Mind and Language. Wolfson College Lectures 1974. Oxford: Clarendon
Press.
Hahn, L. W., and P. A. Schilpp, eds. 1986. The Philosophy of W. V. Quine. La Salle, Ill.: Open Court.
Halmos, P. R. 1960. Naive Set Theory. Princeton: Princeton University Press.
Horwich, P. 1990. Truth. Oxford: Blackwell.
Kambartel, F. 1977/78. “Symbolic Acts.” In Contemporary Aspects of Philosophy, ed. G. Ryle, 70–85.
Stocksfield: Oriel Press.
Kambartel, F. 2000. "Strenge und Exaktheit. Über die Methode von Wissenschaft und Philosophie." In
Formen der Argumentation, ed. G.-L. Lueken, 75–85. Leipzig: University of Leipzig Press.
Lewis, D. 1983. Philosophical Papers I. Oxford: Oxford University Press.
Lewis, D. 1984. On the Plurality of Worlds. Oxford: Oxford University Press.
Lorenz, K., and P. Lorenzen. 1978. Dialogische Logik. Darmstadt: Wiss. Buchg.
Lorenzen, P. 1987. Lehrbuch der Konstruktiven Wissenschaftstheorie. Mannheim: Bibl. Inst.
Lueken, G.-L. 2000. Formen der Argumentation. Leipzig: University of Leipzig Press.
McDowell, J. 1994. Mind and World. Cambridge, Mass.: Harvard University Press.
McDowell, J. 1998. Meaning, Knowledge and Reality. Cambridge, Mass.: Harvard University Press.
Millikan, R. 1984. Language, Thought, and Other Biological Categories. Cambridge, Mass.: MIT Press.
Montague, R. 1974. Formal Philosophy, ed. R. Thomason. New Haven, Conn.: Yale University Press.
Peregrin, J. 1995. Doing Worlds with Words: Formal Semantics Without Formal Metaphysics. Dordrecht:
Kluwer.
Peregrin, J. 2001. Meaning and Structure: Structuralism of Postanalytic Philosophers. Aldershot:
Ashgate.
Putnam, H. 1975a. Mathematics, Matter and Method. Cambridge: Cambridge University Press.
Putnam, H. 1975b. Mind, Language and Reality: Philosophical Papers, vol. 2. Cambridge: Cambridge
University Press.
Quine, W. V. O. 1960. Word and Object. Cambridge, Mass.: MIT Press.
Rorty, R., ed. 1967/1992. The Linguistic Turn: Recent Essays in Philosophical Method. Chicago:
University of Chicago Press. 1–39.
Rosenberg, J. F. 1994. Beyond Formalism. Philadelphia: Temple University Press.
Ryle, G. 1949. The Concept of Mind. London: Hutchinson & Co.
Searle, J. R. 1983. Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge
University Press.
Sellars, W. 1963. Science, Perception, and Reality. London: Routledge & Kegan Paul.
Sellars, W. 1974. “Meaning as Functional Classification: A Perspective on the Relation of Syntax to
Semantics.” Synthese 27: 417–37.
Sellars, W. 1980. Pure Pragmatics and Possible Worlds: The Early Essays of W. Sellars, ed. J. Sicha.
Atascadero: Ridgeview.
Sellars, W. 1997. “Empiricism and the Philosophy of Mind.” Study Guide by R. B. Brandom.
Cambridge, Mass.: Harvard University Press (cf. also Sellars 1963).
Stekeler-Weithofer, P. 1986. Grundprobleme der Logik: Elemente einer Kritik der formalen Vernunft.
Berlin: de Gruyter.
Strawson, P. F. 1968. Studies in the Philosophy of Thought and Action. Oxford: Oxford University Press.
Williams, B. 2002. Truth and Truthfulness: An Essay in Genealogy. Princeton: Princeton University
Press.
Wittgenstein, L. 1921. Tractatus logico-philosophicus. Annalen der Naturphilosophie; Werke, Bd. 1. Frankfurt a. M.: Suhrkamp, 1984.
Wittgenstein, L. 1953. Philosophical Investigations / Philosophische Untersuchungen. Oxford: Blackwell; Frankfurt a. M.: Suhrkamp, 1984.
Wright, C., ed. 1984. Frege: Tradition and Influence. Oxford: Blackwell.
Ziff, P. 1960. Semantic Analysis. Ithaca, N.Y.: Cornell University Press.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Will There Be Blood? Brandom, Hume, and the Genealogy of Modals

Huw Price
University of Sydney

I. THE GENEALOGY OF MODALS

Robert Brandom begins Lecture 4 with the suggestion that modality is problematic
for empiricism but not for naturalism.
The status and respectability of alethic modality was always a point of
contention and divergence between naturalism and empiricism. It poses
no problems in principle for naturalism, since modal vocabulary is an
integral part of all the candidate naturalistic base vocabularies. Funda-
mental physics is, above all, a language of laws; the special sciences dis-
tinguish between true and false counterfactual claims; and ordinary
empirical talk is richly dispositional. By contrast, modality has been a
stumbling block for the empiricist tradition ever since Hume forcefully
formulated his epistemological, and ultimately semantic, objections to
the concepts of law and necessary connection. (2008, 93)

Associating Hume’s challenge to the status of modality with his empiricism rather
than his naturalism, Brandom goes on to suggest that the late twentieth century's rejec-
tion of empiricism’s semantic atomism then clears the way for the modal revolution.
It seems to me that this way of reading the history misses an important ingre-
dient in Hume’s treatment of modality, namely, Hume’s interest in what might be
called the genealogy of modality. This project has the following key features, in my
view:

1. While it may be motivated for Hume by empiricism, it doesn’t depend on
that motivation, and stands alone as a project for the philosophical
understanding of modality—a project in one sense entirely within the
scope of a well-motivated philosophical naturalism.
2. Despite its naturalistic credentials, it represents a profound challenge to
the view of modality reflected in Brandom’s (accurate) characterization
of the attitude of many contemporary naturalists.
3. It is a precursor of Brandom's own project, as described in this lecture,
with the result that his project, too, represents a powerful challenge to
this same form of contemporary naturalism (the kind that “takes modal-
ity for granted,” as it were).

The genealogy of modality is the project of understanding how creatures in our sit-
uation come to go in for modal talk—how we get in the business of using modal
vocabulary and thinking modal thoughts. In Hume, as in Brandom, this is supposed
to be a thoroughly naturalistic inquiry, in the sense that it takes for granted that we
ourselves are natural creatures, without supernatural powers. The interesting thing
is that this naturalistic attitude to modal talk—to the use of modal vocabulary—can
easily challenge a naturalistic view of the subject matter, if by this we mean the view
that modality itself is part of the furniture of the natural world.1
If this seems puzzling, think of the normative analogy. The project of a geneal-
ogy of normative and evaluative notions is to explain why natural creatures like us
go in for using normative concepts and vocabulary. It is naturalistic in the sense
that it regards us and our vocabularies as part of the natural order, but not (or at
least not necessarily) in the sense that it regards norms and values as part of our
natural environment. Many contemporary philosophers seem to find the point
much easier to grasp in the normative than the modal case, a fact which no doubt
reflects, in part, the role that modal notions have come to play in the foundations
of contemporary philosophy. Hume’s expressivist genealogy is a threat to the status
of those foundations, in a way which may seem to have no parallel in the norma-
tive case.
Another factor is the one noted in Brandom’s remarks above: physics seems
modal through and through. In my view, however, any naturalist worth her salt
ought to be cautious of this kind of appeal to physics. We can grant that physics as
it stands is irreducibly modal, without simply throwing in the towel on the ques-
tion as to whether this should be taken as reflecting the way the physical world is
independently of us, or a deeply entrenched aspect of the way in which creatures in
our situation need to conceptualize the world. Of course, these are deep and diffi-
cult questions—it isn’t immediately clear that there’s a standpoint available to us
from which to address them. But this difficulty is no excuse for giving up the
game—for simply assuming, in effect, that physics views the world through per-
fectly transparent lenses. (Where better than physics to remember our Copernican
lessons?)
The point I want to stress is that a plausible naturalistic genealogy for modal
discourse might weigh heavily on one side of these issues. In effect, it might tell us
why creatures in our situation would be led to develop a modal physics, even if they
inhabited a nonmodal world (perhaps a Hume world, as contemporary metaphysi-
cians sometimes say). As I’ve said, such a genealogy would be naturalistic in the
Humean sense, which puts the emphasis on the idea that we humans (and our
thought and talk) are part of the natural order. But many mainstream contempo-
rary philosophers, keen to think of themselves as naturalists, would be discom-
forted by it. Why? Because it would challenge the idea that the function of modal
vocabulary is to keep track of distinctively modal features of the natural world, and
challenge the project of resting realist metaphysics on modal foundations.
Thus I think that Brandom’s presentation of his own genealogical project in
this lecture obscures both some enemies and some allies. Its enemies, as I’ve already
said, are the people in contemporary philosophy who assume a more robust, meta-
physical approach to modality. There isn’t an easy accommodation between
Brandom's project, on the one side, and naturalism as commonly and meta-
physically conceived, on the other. And the tension is one of the most important in
contemporary philosophy, for the reasons Brandom himself puts his finger on: the
role of modality in the revolution that swept through analytic philosophy in the last
third of the twentieth century.
Brandom thus has a bigger fight on his hands than he realizes, I think. But he’s
on the side of (Humean) virtue, in my view, and should be leading the charge—
leading the pragmatists’ campaign against modal metaphysics. The best reason for
optimism about the outcome of the campaign is that we pragmatists hold the nat-
uralistic high ground, and cannot be dislodged—at least, not without dislodging
Darwin, too, for he’s the rock on which we stand. The second-order, naturalistic
reflection on the origin of our vocabularies always trumps, in principle, the unre-
flective first-order intuitions which merely exercise those vocabularies.
As for Brandom’s allies, they are Hume and his expressivist descendants, fellow
travelers in the quest for a pragmatist genealogy of modal idioms. Here are some
leading lights, concerning a mixed bag of modal notions: Ramsey (1926) and other
subjectivists about probability; Ramsey (1929) again, about causation and laws;
Ryle (1950), whom Brandom mentions, about laws and conditionals; Wittgenstein
(1981), about claims of necessity; various advocates of the project of understand-
ing causation in terms of manipulation (e.g., Collingwood [1940], von Wright
[1971], Gasking [1955], and Menzies and Price [1993]); and Simon Blackburn
(1993), of course, who has done more than anyone else in recent years to defend a
kind of modest Humean expressivism (stressing in particular the parallel between
the moral and the modal cases). Common to all these writers, as to Brandom, is a
concern to explain one or other of the modal notions in terms of what we do with
them, what practical role they play in our lives, rather than in metaphysical terms.
This brings me to the more general issue I want to raise, viz., Brandom’s some-
what ambiguous attitude to ontology and metaphysics. In general, pragmatists have
not been shy about expressing some antipathy to metaphysics. Here again, Hume—
an exemplary genealogical pragmatist, in the present sense—is a shining example,
known for his remarks about committing volumes of school metaphysics to the
flames. More recently, one thinks of Wittgenstein’s dismissal of modal realism as
‘the slightly hysterical style of university talk’ (1981, §299); and of Ryle’s remark
that being a professor of metaphysics was like being a professor of tropical dis-
eases—in both cases, Ryle said, the aim was to eradicate the subject matter, not to
promote it.
We don’t find this kind of anti-metaphysical attitude in Brandom. Rather, we
find what looks to me to be a degree of ambiguity, or uncertainty, about the goals
of his project, with respect to what are traditionally treated as metaphysical ques-
tions. I think that Brandom hasn’t seen clearly the importance of a distinction
which is marked relatively sharply in the Humean tradition, between two views of
the philosopher’s project. In the remainder of the paper I want to say something
about this distinction, and offer some textual evidence that Brandom hasn’t prop-
erly faced up to the need to take a stand on it, on one side or other.

II. THE LESSONS OF HUMEAN EXPRESSIVISM

Expressivist views (in what I’m taking to be the Humean sense) are often responses
to what are now called ‘location’ or ‘placement’ problems.2 Initially, these present as
ontological or perhaps epistemological problems, within the context of some broad
metaphysical or epistemological program: empiricism, say, or physicalism. By the
lights of the program in question, some of the things we talk about seem hard to
‘place’, within the framework the program dictates for reality or our knowledge of
reality. Where are moral facts to be located in the kind of world described by
physics? Where is our knowledge of causal necessity to go, if a posteriori knowledge
is to be grounded on the senses?
The expressivist solution is to move the problem cases outside the scope of the
general program in question, by arguing that our tendency to place them within its
scope reflects a mistaken understanding of the vocabulary associated with the mat-
ters in question. Thus the (apparent) location problem for moral or causal facts was
said to rest on a mistaken understanding of the function of moral or causal lan-
guage. Once we note that this language is not in the business of ‘describing reality’,
says the expressivist, the location problem can be seen to rest on a category mistake.
Note that traditional expressivism thus involves both a negative and a positive
thesis about the vocabularies in question. The negative thesis was that these vocab-
ularies are not genuinely representational, and traditional expressivists here took
for granted that some parts of language are genuinely representational (and, implic-
itly, that this was a substantial theoretical matter of some sort). As many people have
pointed out, this thesis is undermined by deflationism about the semantic notions
on which it rests. Less commonly noted is the fact that deflationism leaves entirely
intact the expressivists’ positive thesis, which proposes some alternative expressive
account of the function of each vocabulary in question. As I’ve argued elsewhere
(Macarthur and Price 2007), this kind of positive thesis not only survives deflation
of the negative thesis by semantic minimalism; it actually wins by default, in the
sense that semantic deflationism requires some nonrepresentational account of the
functions of the language in question—in other words, it ensures that the positive
work of theorizing about the role and functions of the vocabularies in question has
to be conducted in nonsemantic or nonreferential terms.
What’s happening at this point on the metaphysical side—i.e., to those onto-
logical issues that expressivism originally sought to evade? Note, first, that tradi-
tional expressivism tended to be an explicitly antirealist position, at least in those
versions embedded in some broader metaphysical program. In ethics, for example,
noncognitivism was seen as a way of making sense of the language of morals, while
denying that there are really any such things as moral values or moral facts. But this
was always a little problematic: if moral language was nondescriptive, how could it
be used to make even a negative ontological claim? Better, perhaps, to say that the
traditional metaphysical issue of realism versus antirealism is simply ill-posed—an
attitude to metaphysics that has long been in play, as Carnap makes clear:
Influenced by ideas of Ludwig Wittgenstein, the [Vienna] Circle rejected
both the thesis of the reality of the external world and the thesis of its
irreality as pseudo-statements; the same was the case for both the thesis
of the reality of universals . . . and the nominalistic thesis that they are
not real and that their alleged names are not names of anything but
merely flatus vocis. (1950, 252)

Famously, Carnap recommends this kind of metaphysical quietism quite generally, and this is surely a desirable stance for an expressivist, especially when semantic
minimalism deflates what I called the expressivist’s negative thesis. An expressivist
wants to allow that as users of moral language, we may talk of the existence of val-
ues and moral facts, in what Carnap would call an internal sense. What’s important
is to deny that there is any other sense in which these issues make sense. Here
Carnap is a valuable ally.
So construed, expressivism simply deflates the traditional ontological ques-
tions—it sets them aside, aiming to cure us of the urge to ask them, as Wittgenstein
or Ryle might put it. In their place, it offers us questions about the role and geneal-
ogy of vocabularies. These are questions about human behavior, broadly construed,
rather than questions about some seemingly puzzling part of the metaphysical
realm. So expressivism isn’t a way of doing metaphysics in a pragmatist key. It is a
way of doing something like anthropology (by which for present purposes I mean a
small but interesting sub-speciality of biology). Hence my Humean slogan: biology
not ontology.

III. BRANDOM AND METAPHYSICS

Where does Brandom stand with respect to this distinction between ontology and
biology, metaphysics and anthropology? My impression is that he sometimes tries
to straddle the divide, or at least doesn’t sufficiently distinguish the two projects.
This is a large topic, deserving a more detailed treatment elsewhere, but I want to
sketch some reasons in support of this assessment.3
On the one hand, as I have already noted, Brandom often writes as if his proj-
ect is metaphysical, in the present sense—as if he is concerned to give us an account
of the nature and constitution of particular items of philosophical interest, such as
conceptual content and the “representational properties” of language:
The primary treatment of the representational dimension of conceptual
content is reserved for Chapter 8 . . . [where] the representational prop-
erties of semantic contents are explained as consequences of the essen-
tially social character of inferential practice. (1994, xvii)

On the face of it, this is a metaphysical stance: it is concerned with representational properties, after all. And at a more general level, consider this:
[T]he investigation of the nature and limits of the explicit expression in
principles of what is implicit in discursive practices yields a powerful
transcendental argument—a . . . transcendental expressive argument for
the existence of objects. (1994, xxii–xxiii)

On the other hand, Brandom often makes it clear that what is really going on is
about the forms of language and thought, not about extra-linguistic reality as such.
The passage I have just quoted continues with the following gloss on the transcen-
dental argument in question: it is an “argument that (and why) the only form the
world we can talk and think of can take is that of a world of facts about particular
objects and their properties and relations” (1994, xxii–xxiii, latter emphasis mine).
Similarly, at a less general level, Brandom often stresses that what he is offer-
ing is primarily an account of the attribution of terms—‘truth’, ‘reference’, ‘repre-
sents’, etc.—not of the properties or relations that other approaches take those
terms to denote. Concerning his account of knowledge claims, for example, he says:
Its primary focus is not on knowledge itself but on attributions of
knowledge, attitudes towards that status. The pragmatist must ask, What
are we doing when we say that someone knows something? (1994, 297,
latter emphasis mine)

But a few sentences later, continuing the same exposition, we have this: “A pragma-
tist phenomenalist account of knowledge will accordingly investigate the social and
normative significance of acts of attributing knowledge” (1994, 297, my emphasis).
Here, the two stances are once again run together: to make things clear, a pragma-
tist should deny that he is offering an account of knowledge at all. (That’s what it
means to say that the project is biology, not ontology.)4
Another point in Brandom’s favor (from my Humean perspective) is that he
often makes it clear that he rejects a realist construal of reference relations. Thus,
concerning the consequences of his preferred anaphoric version of semantic defla-
tionism, he writes:
One who endorses the anaphoric account of what is expressed by ‘true’
and ‘refers’ must accordingly eschew the reifying move to a truth prop-
erty and a reference relation. A line is implicitly drawn by this approach
between ordinary truth and reference talk and variously specifically
philosophical extensions of it based on theoretical conclusions that have
been drawn from a mistaken understanding of what such talk expresses.
Ordinary remarks about what is true and what is false and about what
some expression refers to are perfectly in order as they stand; and the
anaphoric account explains how they should be understood. But truth
and reference are philosophers’ fictions, generated by grammatical mis-
understandings. (1994, 323–24)

Various word-world relations play important explanatory roles in theoretical semantic projects, but to think of any one of these as what is
referred to as “the reference relation” is to be bewitched by surface syn-
tactic form. (1994, 325)

On the other hand, Brandom’s strategy at this point suggests that in some ways
he is still wedded to a traditional representational picture. Consider, in particular,
his reliance on syntactic criteria in order to be able to deny, as he puts it,
that claims expressed using traditional semantic vocabulary make it
possible for us to state specifically semantic facts, in the way that claims
expressed using the vocabulary of physics, say, make it possible for us to
state specifically physical facts. (1994, 326)

Here Brandom sounds like a traditional expressivist, who is still in the grip of the
picture that some parts of language are genuinely descriptive, in some robust sense.
He hasn’t seen the option and attractions of allowing one’s semantic deflationism
to deflate this picture, too; and remains vulnerable to the slide to metaphysics,
wherever the syntactical loophole isn’t available.
This reading is borne out by the fact that at certain points he makes to confront
these traditional metaphysical issues head-on. “None of these is a naturalistic
account,” he says (2000, 27), referring to various aspects of his account of the refer-
ential, objective, and normative aspects of discourse. And again:
Norms . . . are not objects in the causal order. . . . Nonetheless, accord-
ing to the account presented here, there are norms, and their existence
is neither supernatural nor mysterious. (1994, 626)

Once again, this passage continues with what is by my lights exactly the right expla-
nation of what keeps Brandom’s feet on the ground: “Normative statuses are
domesticated by being understood in terms of normative attitudes, which are in the
causal order" (1994, 626). But my point is that he shouldn't have to retreat in this
way in the first place. His account only looks non-naturalistic because he tries to
conceive of it as metaphysics. If he had stayed on the virtuous (anthropological)
side of the fence, there would have been no appearance of anything non-naturalis-
tic, and no need to retreat. (Rejecting the traditional naturalist/non-naturalist
debate is of a piece with rejecting the realist/antirealist debate.)
I have one final example, which seems to me to illustrate Brandom’s continu-
ing attraction to what I am thinking of as the more representationalist side of the
fence—the side where we find the project of reconstructing representational rela-
tions using pragmatic raw materials. It is from Brandom’s closing lecture in the
present series, and is a characterization he offers of his own project, in response to
the following self-posed challenge: “Doesn’t the story I have been telling remain too
resolutely on the ‘word’ side of the word/world divide?” He replies:
Engaging in discursive practices and exercising discursive abilities is
using words to say and mean something, hence to talk about items in
the world. Those practices, the exercise of those abilities, those uses,
establish semantic relations between words and the world. This is one of
the big ideas that traditional pragmatism brings to philosophical
thought about semantics: don’t look, to begin with, to the relation
between representings and representeds, but look to the nature of the
doing, of the process, that institutes that relation. (2008, 177–78)

I have been arguing that the right course—and the course that Brandom actually
often follows, in practice—is precisely to remain “resolutely on the ‘word’ side of the
word/world divide.” This resolution doesn’t prevent us from seeking to explain ref-
erential vocabulary—the ordinary ascriptions of semantic relations, whose perva-
siveness in language no doubt does much to explain the attractiveness of the
representational picture. Nor does it require, absurdly, that we say nothing about
word–world relations. On the contrary, as Brandom himself points out in a remark
I quoted above:
Various word-world relations play important explanatory roles in the-
oretical semantic projects, but to think of any one of these as what is
referred to as “the reference relation” is to be bewitched by surface syn-
tactic form. (1994, 325)

Biological anthropologists will have plenty to say about the role of the natural envi-
ronment in the genealogy and functions of vocabularies. But the trap they need to
avoid is that of speaking of “semantic relations between words and the world,” in
anything other than a deflationary tone. For once semantic relations become part
of the biologists’ substantial theoretical ontology, so too do their relata, at both ends
of the relation. The inquiry becomes committed not merely to words, but to all the
things to which it takes those words to stand in semantic relations—to norms, val-
ues, numbers, causes, conditional facts, and so on: in fact, to all the entities which
gave rise to placement problems in the first place. At this point, expressivism’s hard-
won gains have been thrown away, and the subject has become infected once more
with metaphysics. That's why it is crucial that my (biological) anthropologists
should remain semantic deflationists, in my view, and not try to recover substan-
tial semantic relations, even on pragmatic foundations.
In calling the possibility of this kind of liberation from metaphysics an insight
of Humean expressivism, I don’t mean, of course, to belittle the respects in which
pragmatism has moved on from Hume. Brandom notes that Wilfrid Sellars char-
acterized his own philosophical project as that of moving analytic philosophy from
its Humean phase to a Kantian phase, and glosses the heart of this idea as the view
that traditional empiricism missed the importance of the conceptual articulation
of thought. Rorty, in turn, has described Brandom’s project as a contribution to the
next step: a transition from a Kantian to an Hegelian phase, based on recognition
of the social constitution of concepts, and of the linguistic norms on which they
depend. For my part, I’ve urged merely that Brandom’s version of this project is in
need of clarity on what I think it is fair to describe as a Humean insight. Hume’s
expressivism may well be a large step behind Kant, in failing to appreciate the
importance of the conceptual; and a further large step behind Hegel, in failing to
see that the conceptual depends on the social. But it is still at the head of the field
for its understanding of the way in which what we would now call pragmatism sim-
ply turns its back on metaphysics.

IV. THERE WILL BE BLOOD

By my lights, then, Brandom's attitude to metaphysics seems excessively irenic. I want to follow Hume, Ramsey, Ryle, Wittgenstein, and Blackburn, in dismissing, or
at best deflating, large parts of that discipline. Whereas Brandom—though engaged
in fundamentally the same positive inquiry, the same pragmatic explanatory proj-
ect—seems strangely reluctant to engage with the old enemy.
Nowhere is this difference more striking than in the case of modality. In my
view, modality is the soft under-belly of contemporary metaphysics: the belly,
because as Brandom himself notes in Lecture 4, so much of what now passes for
metaphysics rests on it, or is nourished by it; and soft, because it is vulnerable to
attack from precisely the direction to which the subject itself is most keen to be receptive, that of naturalism. It seems to me that Brandom's treatment of
modality provides precisely the tools required to press this advantage—precisely
the sharp implements we need to make mincemeat of modern metaphysics. Hence
my puzzlement, at his reluctance to put them to work.5
I had planned to end there, but the story is a little more complicated. Modern
metaphysics turns out to have two under-bellies, both of them soft—a fact which
underlines what a strange and vulnerable beast it is, in my view. The second belly
is ‘representationalism’—the fact that much of the subject is built on appeals to ref-
erence, and other robust semantic notions. Here, too, as I’ve said, I read Brandom
as a somewhat ambiguous ally of the traditional pragmatist attack. On the one
hand, he offers us profound new insights into how to do philosophy in another key;
on the other hand, as the remark I quoted from Lecture 6 indicates, he sometimes
seems to want to get out of it some pragmatist substitute for platonic representa-
tion—some surgery which would reconstruct the referential belly of the beast, as it
were, in a new and healthy form.6 Once again, I think that that’s the wrong move.
The two-bellied beast should simply be put out of its misery, and no one is better
placed than Brandom to administer the coup de grâce.

ACKNOWLEDGMENTS

This is a lightly edited version of my comments on Lecture 4 of Brandom's Locke Lectures, as delivered in Prague in April 2007. I am grateful to Bob Brandom and
Jarda Peregrin for inviting me to take part in the Prague meeting, and to the
Australian Research Council and the University of Sydney, for research support.

NOTES

1. This is an example of how subject naturalism can challenge object naturalism, as I put it elsewhere
(Price 2004).
2. See Jackson (1998), for example, for this usage.
3. This section draws extensively on Price (2010a). I am grateful to the editor and publisher of that
volume for permission to re-use this material here.
4. It might seem that I am being uncharitable to Brandom here, taking too literally his claim to be
giving an account of knowledge (and similar claims about other topics). By way of comparison,
isn’t it harmless to say, at least loosely, that disquotationalism is an account of truth, even though
it isn’t literally an account of truth, but rather of the functions of the truth predicate? But I think
there are other reasons for taking Brandom to task on this point—more on this in a moment.
5. Like Prague itself, this is no country for vegetarians.
6. See Price (2010a) and (2010b, Ch. 1) for more on this theme.

REFERENCES

Blackburn, S. 1993. Essays in Quasi-Realism. New York: Oxford University Press.
Brandom, R. 1994. Making It Explicit. Cambridge, Mass.: Harvard University Press.
Brandom, R. 2000. Articulating Reasons: An Introduction to Inferentialism. Cambridge, Mass.: Harvard
University Press.
Brandom, R. 2008. Between Saying and Doing: Towards an Analytic Pragmatism. Oxford: Oxford University Press.
Carnap, R. 1950. “Empiricism, Semantics and Ontology.” Revue Internationale de Philosophie 4:
20–40; reprinted in Philosophy of Mathematics, ed. Paul Benacerraf and Hilary Putnam, 241–57.
Cambridge: Cambridge University Press, 1983. (Page references are to the
latter version.)
Collingwood, R. 1940. An Essay on Metaphysics. Oxford: Clarendon Press.
Gasking, D. 1955. “Causation and Recipes.” Mind 64: 479–87.
Jackson, F. 1998. From Metaphysics to Ethics. Oxford: Clarendon Press.
Macarthur, D., and H. Price. 2007. “Pragmatism, Quasi-realism and the Global Challenge.” In The
New Pragmatists, ed. Cheryl Misak, 91–120. Oxford: Oxford University Press.
Menzies, P., and H. Price. 1993. “Causation as a Secondary Quality.” British Journal for the Philosophy
of Science 44: 187–203.
Price, H. 2004. “Naturalism Without Representationalism.” In Naturalism in Question, ed. David
Macarthur and Mario de Caro, 71–88. Cambridge, Mass.: Harvard University Press.
Price, H. 2010a. “One Cheer for Representationalism?” In The Philosophy of Richard Rorty (Library of
Living Philosophers 32), ed. R. Auxier. Chicago: Open Court.
Price, H. 2010b. Naturalism Without Mirrors. New York: Oxford University Press.
Ramsey, F. P. 1926. “Truth and Probability.” In Philosophical Papers, ed. D. H. Mellor, 52–94.
Cambridge: Cambridge University Press.
Ramsey, F. P. 1929. “General Propositions and Causality.” In Philosophical Papers, ed. D. H. Mellor,
145–63. Cambridge: Cambridge University Press.
Ramsey, F. P. 1990. Philosophical Papers, ed. D. H. Mellor. Cambridge: Cambridge University Press.
Ryle, G. 1950. "'If', 'So', and 'Because'." In Philosophical Analysis, ed. M. Black. Ithaca, N.Y.: Cornell
University Press.
von Wright, G. 1971. Explanation and Understanding. Ithaca, N.Y.: Cornell University Press.
Wittgenstein, L. 1981. Zettel, ed. G. E. M. Anscombe and G. H. von Wright and trans. G. E. M.
Anscombe, 2nd edition. Oxford: Blackwell.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Brandom’s Incompatibility Semantics

Jaroslav Peregrin
Prague

I. WHY FORMAL SEMANTICS?

Formal semantics is an enterprise which accounts for meaning in formal, mathematical terms, in the expectation of providing a helpful explication of the concept
of the meaning of specific word kinds (such as logical ones), or of words and
expressions generally. Its roots go back to Frege, who proposed exempting concepts,
meanings of predicative expressions, from the legislation of psychology and relo-
cating them under that of mathematics. This started a spectacular enterprise, fos-
tered at first within formal logic and later moving into the realm of natural
languages, and featuring a series of eminent scholars, from Tarski and Carnap to
Montague and David Lewis.
Partly independently of this, Frege set the agenda for a long-term discussion of
the question of what a natural language is, his own contribution being that lan-
guage should be seen not as a matter of subjective psychology, but rather as a real-
ity objective in the sense in which mathematics is objective. His formal semantics,
then, was just an expression of this conception of language. And many theoreticians
now take it for granted that formal semantics is inseparably connected with a
Platonist conception of language.
Moreover, the more recent champions of formal semantics, Montague and
David Lewis, took for granted that natural language is nothing other than a structure
of the very kind envisaged by the theories of formal logicians. While Montague
claims quite plainly that there is no substantial difference between formal and nat-
ural languages (“I reject the contention,” he says [1974, 188], “that an important
theoretical difference exists between formal and natural languages”), Lewis states
that it is fully correct to say that a linguistic community entertains a language in the
form of a mathematical structure (“we can say,” states Lewis [1975, 166], “that a given
language L [a function from a string of sounds or of marks to sets of possible
worlds] is used by or is a (or the) language of a given population P”).
However, in the last third of the twentieth century, many (direct and indirect)
participants in the discussion of the question what is language? underwent a dis-
tinctively pragmatic turn. Philosophers of language started to revive American prag-
matism, or the ideas of the later Wittgenstein, or Austinian speech act theory, and
they began to picture language primarily as an activity—or as a toolbox for carry-
ing out an activity—rather than as a mathematical structure. In response to the
Montague-Lewisian conception of language, Donald Davidson (1986) wrote that "there is no such thing as a language, not if a language is anything like what many
philosophers and linguists have supposed.”
Robert Brandom’s philosophy of language undoubtedly fits with the latter
stream. For Brandom, language is first and foremost an activity, albeit a rule-governed
one. From this it might be expected that his attitude toward formal semantics
would be very reserved. But surprisingly, this does not seem to be the case—what
Brandom urges in the fifth of his Locke Lectures is strongly evocative of a formal
semantics. Is this viable? Does not the approach to semantics entertained by
Montague (1974) or Lewis (1972) presuppose a kind of ‘formal metaphysics’ which
is utterly at odds with pragmatism? Can a Brandomian-style inferentialist really
embrace formal semantics?
To sort things out, I will concentrate, in this paper, on the following three ques-
tions:

(1) Is a formal approach to semantics compatible with pragmatism and


inferentialism?
(2) If so, what kind of formal semantics is useful from the inferentialist per-
spective and of what good can it be from this perspective?; and
(3) Is Brandom’s incompatibility semantics a suitable kind and does it bring
us some new insights into the phenomena of meaning and language?

My answer to question (1) is, as expected, positive. For some time now, I have
been urging a reconciliation of pragmatism and formal semantics—see esp. Peregrin
(2001; 2004). I am unsure about how far my reasons for giving this answer overlap
with the reasons that made Brandom produce his incompatibility semantics; but I
think that writing this paper might be a good way to find out.
I personally think that it is vital to appreciate the distinction between claiming
that language is a mapping of expressions on objects and saying that it is useful to
model it as a mapping of expressions on objects. The former claim commits us to a
radically limiting statement about what meaning is and how an expression can
come to have it. Elsewhere (ibid., §8.4) I compared this to the relationship between
the Bohr model of the internal workings of an atom (the tiny sun-nucleus orbited
by even tinier planet-electrons) and its actual innards. It is clear that the atom’s
innards are not identical to what the Bohr model shows; but nevertheless it is useful
to model it in this way. Why? Because, first, what the inside is precisely like defies description by the usual means of our language anyway; and, second, the model renders the inside more comprehensible to us (at the cost of oversimplifications). In short, it
provides for what Wittgenstein (1953, §122) called, in the case of language, perspic-
uous representation [übersichtliche Darstellung]; or what Haugeland (1978) called
structural explanation.
An important point is that whereas saying that a language is a set of labels
stuck on objects (a claim that is often ascribed to anybody doing formal semantics)
involves saying that the way an expression acquires meaning is by becoming a label,
saying that language can be modeled as such a set of labels involves no such com-
mitment. In particular, the latter is fully compatible with seeing meanings as roles:
though an expression would acquire such a role by means of being engaged within
a praxis, there is no reason not to envisage the roles by means of a model in which
the expressions are put side by side with their roles, which are encapsulated into
some kind of formal objects.

II. WHAT DOES IT TAKE TO BE A MODEL OF LANGUAGE?

Given that there is a point in doing a logico-mathematical model of language, we should ask ourselves what kind of model we want, what such a model is to show us, and how we are to assess its adequacy (and hence 'reasonableness'). This brings us
to the above question (2). To answer it, let us consider the standard format of the
definition of the formal languages of logic, which are to serve us as our models.
Such a language usually has three parts (one of the latter two might be miss-
ing). First, there is syntax (proper), the delimitation of well-formed formulas (and
more generally well-formed expressions). Second, there is an axiomatics (logical
syntax, in Carnap’s phrase), the delimitation of the relation of inference among wffs
and/or the set of theorems. Third, there is a (formal) semantics, the delimitation of
acceptable assignments of denotations to expressions, which ground the relation of
consequence and/or the set of tautologies.3
If we want to use such a language as a model of an actual language, then which
of its features should be aligned with corresponding features of the natural lan-
guage to be modeled? It is clear and uncontroversial that the first component, the
delimitation of wffs, should somehow correspond to the grammar of the language
to be modeled. Of course, the correspondence may be very indirect (we may want

to have the grammar of the formal language significantly simpler than the gram-
mar of the natural one)—the important thing is that we should be able to say
which wff (if any) corresponds to a given sentence and which sentence corresponds
to which wff.
Many logicians would say that the point of contact between the real language
and the model is the semantics—that the denotations assigned to the expressions of the formal language should correspond to, or model, or capture, or
simply be the meanings of the corresponding expressions of natural language.
Thus, classical logic maps ‘∧’ on the usual truth table, the story goes, because the
corresponding truth function is the meaning of English ‘and’, which gets regi-
mented as ‘∧’. This amounts to saying that the denotation-assignment function of
a formal model should be seen as reflecting the real association of meanings with
expressions.
Continuing along these lines, then, it would appear that there is no direct rela-
tionship between the inferential component of the formal language and natural
language—there being nothing in natural language that would directly correspond
to inference. Inference, from this viewpoint, is merely the logicians’ tool of getting
a better grip on consequence, which itself is a matter of semantics (and can be
defined via denotation-assignments). Inference is thus the theoretician’s attempt to
approximate consequence by finite and hence convenient means.
Needless to say, were this to be the sole way of aligning the formal model with
a real language, then the inferentialist should best refrain from taking formal models
seriously—for he does not believe that there is a meaning, independent of infer-
ence, that can be thus compared with the semantic interpretants of the formal lan-
guage. But fortunately it is not the only way; there exists an alternative: we can say
that where the model connects with reality is not semantics, but inference. We may
then say that the model is adequate if the stipulated inferential rules explicate the
inferential rules constitutive of the real language. And we may say that semantics is
simply our way of seeing the inferential roles as distributed among individual
expressions—as the expressions being mapped on their contributions to the infer-
ential patterns they support.
This, of course, presupposes that inference, besides being a tool employed by
the logician as an expedient within her pursuit of consequence, is also something
already present within natural language. And this is precisely the basic inferential-
ist credo: that language is generally constituted by rules and that meaning, in par-
ticular, is constituted by inferential rules—so much so, that what makes an
expression meaningful in the first place is the fact that it is governed by an inferen-
tial pattern.
To illustrate this, let me consider the simple case of classical logical constants.
The denotationalist would say that classical logic can be seen as a model of natural
language insofar as the logical constants denote the very same things as their natu-
ral language counterparts (modulo a reasonable simplification and idealization);
and that the soundness and completeness proof for the classical propositional cal-
culus shows that the ensuing relation of consequence can be conveniently captured

in terms of a couple of axioms and the rule MP. The inferentialist, by contrast,
would say that classical logic models natural language insofar as the axiomatics (or
the natural deduction rules) of classical logic capture the inferential patterns actu-
ally governing their counterparts in natural language (modulo a reasonable simpli-
fication and idealization); and that the soundness and completeness proof shows
that the usual semantics can be seen as a way of presenting the inferential structure
as distributed among individual expressions.
This brings us to the problem of the primitives, the ‘unexplained explainers’
the theory of language is to rest upon. The central concept of this enterprise is, no
doubt, meaning. But when we look at early semantic theories within modern logic
(what Tarski called “scientific semantics”), we see this concept fading into the back-
ground: the crucial concept appears to be that of truth. This might be explained by
the fact that these theories never targeted the intuitive concept of meaning (except,
perhaps, its ‘extensional component’); however, an explicit defense of taking the
concept of meaning as secondary to the concept of truth is also available: in its most
elaborated version from Davidson. Thus, in these theories, the concept of truth is
what generally stands as the ‘unexplained explainer’ of semantics—in the sense that
this concept is either not explained at all, or is reduced to some nonsemantic con-
cepts and serves as the basis for the explanation of other semantic notions. In this
order of explanation, consequence is construed as truth-preservation and inference
as the best possible approximation to truth-preservation in terms of rules (besides
this, there is the purely formalistic concept of inference, where it is stripped down
to transformability according to an arbitrarily chosen set of rules, but this has little
to do with semantics).
From the inferentialist perspective, however, the situation looks very different.
From the perspective Brandom is advocating, the core of any linguistic practices is
the game of giving and asking for reasons. And hence we should ask: what conditions
must be fulfilled by a language, understood as a collection of expressions, in order
for it to be able to facilitate this game?
It is clear that to ask for reasons, I need a way of challenging an utterance; i.e., I
need some statements to count as a challenge to other statements. Thus, language
must provide for statements that are incompatible with other statements. Contrast-
ingly, to give reasons, I need a statement from which the statement constituting the
utterance I strive to substantiate follows; and thus language must also provide for
statements that are correctly inferable from other statements. Hence it seems that
the key concepts upon which a Brandomian style theory of semantics must rest,
and which it must take for granted, are those of incompatibility and inference.

III. INFERENTIAL SEMANTICS

A model based on the concept of inference consists primarily of a formal language L and a relation ⊢ between finite sequences of its statements and its statements. The
structure of the model language approximates the grammatical structure of the
natural language in question (the potentially infinite number of sentences of L
must be certainly ‘finitely generated’, by means of a finite lexicon and a finite set of
rules for producing expressions from expressions), and the relation approximates
the inferential structure of the language (which the inferentialist sees as existing and
as identifiable): hence A1, . . . , An ⊢ A if the counterpart of A is correctly inferable
from the counterparts of A1, . . . , An.
In a sense, the job of the inferentialist modeler is done once he produces this
structure to his satisfaction. Of course, ⊢ must also be ‘finitely generated’, in this
case resting on the grammatical structure of L, for there is no way of presenting an
infinite relation other than to present some basic instances together with some
recursive ways of extending them; and there is no nontrivial way of specifying such
recursive ways save as resting on the recursive structure constitutive of its linguis-
tic carrier, i.e. on the rules of grammar. Note, however, that there is no reason to
assume that each expression will have an inferential pattern independent of those
of other expressions (like ‘∧’ does), nor that the patterns for more complex expres-
sions will always be derivable from those for their parts. The only restriction is the
‘finite generatedness’.
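
To make the idea of ‘finite generatedness’ concrete, here is a minimal sketch. The three-sentence language and the basic instances are an invented toy (nothing from Brandom’s own apparatus): a handful of basic instances of ⊢ is closed under the Gentzenian structural rules of reflexivity, weakening, and cut, with premises treated as sets rather than sequences, and the whole (finite) relation is thereby generated.

```python
# A minimal sketch: a finitely generated inference relation over a toy
# three-sentence language. The basic instances are invented for illustration;
# the closure rules are the Gentzenian structural ones (reflexivity,
# weakening, cut), with premises treated as sets rather than sequences.
from itertools import combinations

SENTENCES = ["dog", "animal", "barks"]

BASIC = {(frozenset(["dog"]), "animal"),   # 'dog' |- 'animal'
         (frozenset(["dog"]), "barks")}    # 'dog' |- 'barks'

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def generate(basic, sentences):
    """Close the basic instances under reflexivity, weakening, and cut."""
    inf = set(basic) | {(frozenset([s]), s) for s in sentences}   # reflexivity
    changed = True
    while changed:
        changed = False
        for (X, a) in list(inf):
            for Y in powerset(sentences):                          # weakening
                if (X | Y, a) not in inf:
                    inf.add((X | Y, a)); changed = True
        for (X, a) in list(inf):
            for (Z, b) in list(inf):
                if a in Z and (X | (Z - {a}), b) not in inf:       # cut
                    inf.add((X | (Z - {a}), b)); changed = True
    return inf

INFERS = generate(BASIC, SENTENCES)
print((frozenset(["dog"]), "animal") in INFERS)           # True: basic instance
print((frozenset(["dog", "barks"]), "animal") in INFERS)  # True: by weakening
print((frozenset(["barks"]), "dog") in INFERS)            # False: not generated
```

Nothing in this sketch requires that every expression carry an inferential pattern of its own, nor that patterns for complex expressions be derivable from those of their parts; the only constraint exercised is generation from finitely many basic instances and finitely many rules.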
However, a traditionally minded semanticist will probably not accept this kind
of model as satisfactory. Where is truth?, where is consequence?, and indeed, where is
meaning?, may well be asked. And, why call a purely syntactical structure like the
above one an explication of language? Although I am convinced that to call this kind of
structure “purely syntactical” is misguided, I think these questions do require atten-
tion; so it is reasonable for the inferentialist to develop the structure further.
First, can an inferentialist make sense of consequence as distinct from infer-
ence? I think that to some extent, she can. A paradigmatic example of such an
enterprise can be found in Carnap’s (1934) Logical Syntax of Language. What
Carnap undertakes in the book is a purely inferentialist enterprise; but surpris-
ingly, what he declares to be his ultimate aim is not to delimit the relation of infer-
ence (Ableitbarkeit), but the relation of consequence (Folgerung). And indeed he
tries to define consequence—as distinct from inference—using his purely infer-
entialist means.
Now, having consequence, we can proceed to truth. For a traditional semanti-
cist, consequence is in fact truth-preservation; and I see no reason for an inferen-
tialist to disagree. The only thing he cannot agree to is to construe this as saying that
the concept of consequence is reducible to that of truth. But if he wants to have con-
sequence (or inference, if he thinks he can substantiate the denial of a gap between
the two) as a concept more basic than truth, he can try to use the above relation-
ship between truth and consequence as means of reducing the former concept to
the latter one, rather than vice versa. He can say that truth is what is preserved by
consequence.
We are now approaching the last, and the most important, concept of the
semantic battery—that of meaning. From the inferentialist viewpoint, meaning is

usually said to be something like an inferential role, but as the concept of role is
somewhat vague, we need an explication. I think that we can implicitly delimit the
concept of inferential role by the following sets of constraints:

(a) Inferential roles uniquely determine inference; that is, the inferential roles
of A1, . . . , An and A uniquely determine whether A1, . . . , An ⊢ A or not.
(b) The inferential role of a complex expression is uniquely determined by the
inferential roles of its parts (this follows from the fact that inferential
roles are identified with meanings and that the concept of meaning
involves compositionality—as I have argued elsewhere).
(c) Inferential roles are a matter of nothing else than (a) and (b); that is, the
inferential role is ‘the simplest thing’ that does justice to (a) and (b).

Do (a)–(c) answer the question of what an inferential role is? If we compare the
concept of inferential role to that of number, it is not an answer on a par with von
Neumann’s saying that a number is the set of its predecessors (with zero being the
empty set), but it is an answer in the sense of Quine’s saying that a number is what is
specified by the axioms of arithmetic. And given that on closer inspection von
Neumann’s answer turns out to be not really an answer to the question what is num-
ber?, but rather one of many possible explications of the concept, I think that (a)–(c)
do offer us a way of answering the question about the nature of inferential roles.
Of course, just as in the case of numbers, it makes sense to go on explicating
inferential roles. How is this to be achieved? The most straightforward way is to
provide a mapping of expressions on some kind of entities that would do justice to
(a)–(c). Before doing this, we should probably prove that such a mapping exists at
all—but this is easy. It is clear that there is a relation of intersubstitutivity salva
inferentiae—let us write E ≈ E′ iff for every inference A1, . . . , An ⊢ A iff A1[E/E′], . . . ,
An[E/E′] ⊢ A[E/E′] (where X[E/E′] is the result of replacing zero or more occurrences of E by E′ in X). Now it is not difficult to see that mapping E on the class [E]≈
of all expressions that are equivalent with E does justice to (a)–(c).
Hence we can use the equivalence classes as explications of the inferential roles;
but this does not lead to an illuminating explication. Let us call the upstream
inferential potential of a sentence the set of all sentences from which the sentence
follows, and let us call the downstream inferential potential the set of all sentences which
follow from it together with various collateral premises:
A← = {S | S ⊢ A}
A→ = {<S1, S2, S> | S1, A, S2 ⊢ S}

The inferential potential of A then can be seen as composed of these two parts:
A_IP = {A←, A→}

An inferential role of an expression E can now be taken as a function mapping sentence contexts (appropriate for E) on inferential potentials, where a sentence context of a type appropriate for E is a sentence with one or more occurrences of an expression of the grammatical category of E replaced with three dots. (Thus, “. . . is an inferentialist” is a sentence context appropriate for “Bob,” and “Bob . . . an inferentialist” is one appropriate for “is”.) The inferential role of E maps a sentence context on the inferential potential of the sentence which results from the substitution of E for the three dots of the context:
E_IR(SC( . . . )) = SC(E)_IP

Now the inferentialist’s claim is that two expressions have the same inferential roles
(in this sense) iff they are intersubstitutive salva inferentiae.
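
As an illustration of the intersubstitutivity test, the following sketch decides whether two expressions are intersubstitutable salva inferentiae relative to a small, inferentially closed toy relation. The language, the closure rule, and two simplifications (all occurrences are substituted at once, and only premise sets of up to two sentences are checked) are assumptions of this toy, not part of the definition above.

```python
# A minimal sketch: intersubstitutivity salva inferentiae over a toy,
# inferentially closed relation. The language, the closure rule, and the
# restriction to small premise sets are all simplifying assumptions.
from itertools import product

SENTENCES = ["Rex is a vixen", "Rex is a female fox", "Rex is a fox"]

def closure(premises):
    """Close a premise set under the toy language's basic inferences."""
    s = set(premises)
    if "Rex is a vixen" in s or "Rex is a female fox" in s:
        s |= {"Rex is a vixen", "Rex is a female fox", "Rex is a fox"}
    return s

def infers(premises, conclusion):
    """A1, ..., An |- A iff A is in the inferential closure of the premises."""
    return conclusion in closure(premises)

def subst(sentence, e, e2):
    """Replace all occurrences of expression e by e2 (a simplification)."""
    return sentence.replace(e, e2)

def intersubstitutable(e, e2):
    """E ~ E' iff no inference changes its status when E is replaced by E'."""
    for k in range(3):                       # premise sets of size 0, 1, 2
        for prem in product(SENTENCES, repeat=k):
            for concl in SENTENCES:
                before = infers(set(prem), concl)
                after = infers({subst(p, e, e2) for p in prem}, subst(concl, e, e2))
                if before != after:
                    return False
    return True

print(intersubstitutable("vixen", "female fox"))  # True: same inferential role
print(intersubstitutable("vixen", "fox"))         # False: 'fox' is weaker
```

In this toy, “vixen” and “female fox” come out equivalent and would thus be mapped onto the same class [E]≈, while “vixen” and “fox” do not.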
The obvious drawback is that, explicated in this way, the inferential roles do
not comply with (a)–(c) above; indeed the inferential roles of any two different
expressions come out as different. This is because the explicata are made up of lin-
guistic expressions, distinguishable, as they are, one from another, whereas we now
want to avoid distinguishing between inferentially equivalent ones. This poses a
problem, though only one of a purely technical nature. Just as with all explications,
to find the truly handy solution may take some ingenuity (for we seem unable to
say in advance what exactly is meant by handy), but there are, I think, no deep
philosophical problems involved.

IV. INFERENCE AND INCOMPATIBILITY

We claimed that the other basic concept an inferentialist could use as an ‘unex-
plained explainer’ is incompatibility (or incoherence). Can we develop our inferen-
tial model of language to accommodate it? If we were to accept that incompatibility
is reducible to inference, then there is one obvious way, a very traditional one: to say
that two sets of sentences are incompatible if every sentence is inferable from their
union.
In this way, incompatibility becomes dependent on the expressive richness of
the language in question. Something, it would seem, might come out as incoherent
only because the language lacks, by pure chance, all sentences that would not be
inferable from it. And though I think that in the case of a natural language we are
entitled to presuppose some kind of ‘expressive saturatedness’, this feature of the
reduction of incompatibility to inference makes it a little bit suspicious.
There is also a way of making room for the concept of incompatibility by gen-
eralizing the concept of inference: by saying that inference is not a relation between
finite sequences of statements and statements, but rather that it is a relation between
finite sequences of statements and finite sequences of statements of length zero or
one. The extension is natural in that the resulting relation would comply with the
Gentzenian structural rules and can be seen as halfway to the multi-conclusion ver-
sion of inference as we know it from the sequent calculus.

The way chosen by Brandom consists in taking incompatibility as the ‘unex-
plained explainer’, explicating inference as compatibility-preservation: claiming A1,
. . . , An ⊢ A is construed as claiming that whatever is incompatible with A is incom-
patible with A1, . . . , An, i.e. that whatever is compatible with the latter is compat-
ible with the latter. (This indicates that there is a sense in which compatibility
assumes the role of truth in his system; we will return to this later.)
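
For concreteness, the following sketch implements this construal directly on an invented toy: the three-sentence language and the ‘minimally incoherent’ sets are mine, and X is taken to entail p just in case every set incompatible with p is incompatible with X. The quantification over all sets of sentences, which will matter below, is rendered harmless here by the finiteness of the toy language.

```python
# A minimal sketch: inference defined from incompatibility. The three-sentence
# language and the 'minimally incoherent' sets are invented for illustration;
# a set counts as incoherent iff it includes one of them.
from itertools import combinations

LANG = ["raining", "dry", "wet"]
MINIMAL_INC = [{"raining", "dry"}, {"dry", "wet"}]

def incoherent(X):
    return any(core <= set(X) for core in MINIMAL_INC)

def subsets(lang):
    return [set(c) for r in range(len(lang) + 1) for c in combinations(lang, r)]

def entails(X, p):
    """X |= p iff whatever is incompatible with p is incompatible with X."""
    return all(incoherent(Y | set(X))
               for Y in subsets(LANG)
               if incoherent(Y | {p}))

print(entails({"raining"}, "wet"))  # True: only 'dry' excludes 'wet', and 'raining' excludes 'dry'
print(entails({"raining"}, "dry"))  # False: {'wet'} excludes 'dry' but not 'raining'
```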
Is there a difference between basing the formal semantics on inference and bas-
ing it on incompatibility? We have seen that the concepts may be thought of as
interdefinable; but the interdefinability is not unproblematic. Starting from infer-
ence we can take incompatibility in stride, we saw, only with some provisos. Con-
versely, basing inference on incompatibility might seem more straightforward; but
it contains a notable snag. The reduction of inference to incompatibility contains
quantification over all sentences, hence the inference relation based on a finitary
incompatibility relation might not be itself finitary.
This may be more important than first meets the eye. Recall that when we con-
sidered the general structure of a formal language we distinguished between
axiomatics (proof theory) and semantics (model theory), and we took it to be important which of them is the ‘point of contact’ between the model and a real lan-
guage. Now what is the crucial difference between these two components? After all,
both of them aim at a specification of a relation among sentences (inference resp.
consequence), or a set of sentences (theorems resp. tautologies) and we know that
at least in some cases (classical propositional logic) even their outcome is the very
same.
The basic difference, I think, is that while the proof-theoretic enterprise is prin-
cipally a matter of finite generation, there is no such restriction for the model-
theoretic specification. As a result, this specification may happen to be finitary (as
in the case of classical propositional logic), but in general need not. Now the point
is that if we have a finitely generated relation of incompatibility and use it to define
the relation of inference, the resulting definition may no longer be finitary. This
means that a proof-theoretical delimitation of incompatibility may yield a defini-
tion of inference that is, by its nature, more model-theoretic than proof-theoretic.
In this sense, starting from incompatibility may lead us, in a way that is easy to miss,
into the realm of formal semantics.
It is also worth noticing that this definition of inference leads us, almost
inescapably, into the realm of classical logic. Defining inference as containment of
a set associated with the conclusion in the intersection of sets associated with the
premises amounts (independently of whether the sets are sets of incompatible sets
of sentences or of anything else) to giving the resulting logic a Boolean semantics—
i.e. to having it semantically interpreted in a Boolean (rather than, say, Heyting)
algebra. This interpretation, then, leaves us almost no other possibility than to
define the logical operators in the Boolean, i.e. classical way. I find this problematic,
because, as I argued elsewhere (see Peregrin, to appear), I find intuitionist logic
more natural than classical from the inferentialist viewpoint.

We have said that, once we have an inferential model of language, the reason for going on to develop a formal semantics may be the desire to explicate the concept of meaning (a desire to which we may or may not submit); but now we see that we might be driven
into semantics in a sense against our will, by deciding to start from incompatibility
rather than from inference.

V. INCOMPATIBILITY SEMANTICS AND POSSIBLE WORLD SEMANTICS

Now we are finally homing in on our initial question (3). Does basing a semantic
model on the concept of incompatibility, rather than on inference, or indeed on a
more traditional concept, like that of a possible world, give us any advantage?
Clearly, by granting ourselves the concept of incoherence, and hence coherence, we
grant ourselves all we need to introduce the concept of possible world. It is well
known that a possible world can be seen as a maximal coherent set of sentences; for
the sake of perspicuity, let us distinguish, as is sometimes done, between a possible
world as such, and a possible world story, the corresponding set of sentences (true
w.r.t. the possible world). What we can take to be a possible world story is any set
of sentences that is not itself incoherent, but every proper superclass of which is incoherent.
The problem with turning incompatibility semantics into a possible world seman-
tics then seems to consist in the fact that we do not have the concept of truth, hence
we cannot talk about a sentence being true w.r.t. a possible world.
This problem, however, is easily overcome. It is enough to realize that a possi-
ble world is complete in the sense that it leaves no possibilities open. Either some-
thing is the case, or it is not; there is no room for something being open in the sense
that something could be coherently added. Hence to be true w.r.t. a possible world
is to be compatible with the corresponding world story, i.e. to be part of the world
story. (This explains the hint we made above—namely, that within the Brandomian
setting, there is a sense in which compatibility assumes the role of truth.) This
means that if we call any maximal coherent set of sentences a possible world story
and if we say that a sentence is true w.r.t. the corresponding possible world iff it is
compatible with the set, we have a recasting of the incompatibility semantics into
a possible world semantics.
Given what Brandom calls an incompatibility frame, i.e. a set (of sentences) and a subset of its powerset (of incoherent theories) closed under supersets, let us denote the set of all maximal coherent sets of sentences W and let w range over its members. Let us write w ||= p for p is true in w (which, as we already know, amounts to p ∈ w). Note also that w ||= p iff w |= p, i.e. iff w incompatibility-entails p. (w |= p iff every q incompatible with p is incompatible with w, i.e. iff every q compatible with w is compatible with p, i.e. iff every element of w is compatible with p, i.e. iff p is compatible with w, i.e. iff p is true in w.) Hence p is true in w iff it is entailed by w iff it is compatible with w iff it is an element of w.
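
The following sketch (again with a toy incoherence property of my own; the sentence names are just labels) computes the possible world stories as the maximal coherent sets and confirms that, for such maximal sets, being true w.r.t. a world, being compatible with it, and being an element of it coincide.

```python
# A minimal sketch: possible world stories as maximal coherent sets. The toy
# language and incoherence property are invented; 'not-p' and 'not-q' are just
# sentence names, not an object-language negation.
from itertools import combinations

LANG = ["p", "not-p", "q", "not-q"]
MINIMAL_INC = [{"p", "not-p"}, {"q", "not-q"}]

def incoherent(X):
    return any(core <= set(X) for core in MINIMAL_INC)

def subsets(lang):
    return [set(c) for r in range(len(lang) + 1) for c in combinations(lang, r)]

def possible_worlds():
    """Coherent sets none of whose proper supersets (within LANG) are coherent."""
    return [w for w in subsets(LANG)
            if not incoherent(w)
            and all(incoherent(w | {s}) for s in LANG if s not in w)]

def true_at(p, w):
    """p is true w.r.t. w iff p is compatible with the world story w."""
    return not incoherent(w | {p})

worlds = possible_worlds()
print(len(worlds))                                                   # 4 world stories
print(all(true_at(p, w) == (p in w) for w in worlds for p in LANG))  # True
```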

This also gives us all the resources needed to define the simplest (S5-) kind of
necessity: as we have possible worlds, we can say that p is necessary iff it is true w.r.t.
all the possible worlds and that it is possible iff it is true w.r.t. at least one of them.
To introduce any other kind of necessity, we would need, moreover, something
amounting to the Kripkean accessibility relation among worlds; and it is important
to realize that we have nothing of this kind. The point is that the accessibility rela-
tion spells out as a kind of ‘second-level compatibility’: though any two worlds are
incompatible (in the sense that the union of their stories is incoherent) some
worlds are ‘less incompatible than others’ (for example, by sharing the same phys-
ical laws etc.).
In view of this, it seems puzzling that Brandom introduces a notion of neces-
sity that appears to be more complicated than the simplest one. His definition of
the necessity operator is the following:
X∪{Lp} ∈ Inc iff X ∈ Inc or ∃Y[X∪Y ∉ Inc and Y |≠ {p}]

This says, in effect, that p is necessary w.r.t. a world iff p is entailed by everything that is compatible with this world:
(*) w ||= Lp iff ∀Y(if w ||= Y, then Y |= p)

But it is not difficult to show that this unperspicuous definition reduces to a very
simple one:
(**) w ||= Lp iff |= p

(To get from the right-hand side of (*) to that of (**) it is enough to instantiate Y
as the empty set. To get, vice versa, from (**) to (*), it is enough to realize that for
every Y, |= p entails Y |= p.)
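
The reduction can also be checked by brute force. The sketch below uses a toy model of my own (‘taut’ stands in for a sentence compatible with everything), reads “w ||= Y” as Y ⊆ w, and compares the right-hand side of (*) with the right-hand side of (**) for every world and every sentence of the toy language.

```python
# A brute-force check of the reduction of (*) to (**) on an invented toy model.
# 'taut' plays the role of a sentence compatible with everything; 'w ||= Y' is
# read as Y being a subset of the world story w.
from itertools import combinations

LANG = ["p", "not-p", "q", "not-q", "taut"]
MINIMAL_INC = [{"p", "not-p"}, {"q", "not-q"}]

def incoherent(X):
    return any(core <= set(X) for core in MINIMAL_INC)

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(sorted(xs), r)]

def entails(X, p):
    # X |= p iff everything incompatible with p is incompatible with X
    return all(incoherent(Y | set(X)) for Y in subsets(LANG) if incoherent(Y | {p}))

WORLDS = [w for w in subsets(LANG)
          if not incoherent(w) and all(incoherent(w | {s}) for s in LANG if s not in w)]

def star(p, w):        # the condition in (*): every Y true in w entails p
    return all(entails(Y, p) for Y in subsets(w))

def starstar(p):       # the condition in (**): p is entailed by the empty set
    return entails(set(), p)

print(all(star(p, w) == starstar(p) for w in WORLDS for p in LANG))  # True
print(starstar("taut"), starstar("p"))   # True False: only 'taut' is necessary
```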
This means that basing semantics exclusively on the pure, ‘first-level’ concept
of compatibility gives us resources to make sense of merely the most common-or-
garden variety version of necessity. (See Appendix for an illustration of how an
additional level can elicit a different logic.) In contrast to this, it seems that the logi-
cal vocabulary of natural language is far richer and diverse. We have various kinds
of necessity words—words explicitating varieties of material inferences from causal
necessity or epistemic necessity to something very close to the S5-kind of logical
necessity.
A response to this might be But this is as it should be—perhaps Brandom’s is the
only purely logical version of necessity—others being contaminated, in one way or another, by materialness. It might be argued that other versions of necessity correspond
to such things as physical necessity that are surely not purely logical matters. But I
do not think this answer, though partly true, should be fully embraced.
It is obvious that a natural language never comes with a border neatly separat-
ing its logical vocabulary from the rest of its words. (Hence just as it is futile, as
Quine taught us, to look for a seam between the analytic and the synthetic, it is
equally futile to look for one between the logical and the material—for natural lan-
guage is generally seamless.) Moreover, I do not believe that this should be seen as

a shortcoming and make us search for the purely logical in ways bypassing natural
language altogether. I think that the reason behind the apparent inextricability of
the logical from the extralogical, just like that of the analytic from the synthetic, is
that these distinctions are of our own making—that they are in the eye of the
beholder rather than in language itself.
To avoid any misunderstanding: there is no doubt that these distinctions are
extremely useful and they help us understand the functioning of natural language.
But we must not forget that they are a matter of the Bohr-atom kind of idealiza-
tion—a kind of idealization that is, beyond doubt, an important (if not indispen-
sable) means of our understanding things like language or the structure of matter,
but an idealization nevertheless. To see the idealized model as an exact copy of the
thing modeled and to believe that whatever is displayed by the former, but does not
seem to be displayed by the latter, must therefore be somehow ‘hidden under its
surface’, is to misunderstand the whole enterprise of model-aided understanding.
What are the implications of all this for the question of whether there is the set
of logical operators and hence the logic? I do not think the answer is in the affirmative.
Apart from there being some leeway in how we explicitate inference, there is the
question what inference exactly is. Most usually it is taken as a relation between
finite sequences (or finite sets) of statements and statements; but after Gentzen’s
(1934) seminal work it is not unusual to take it alternatively as a relation between
finite sequences (or finite sets) of statements and finite sequences (or finite sets) of
statements; and we may also consider some possibilities between the two extremes.
Besides this, it is usual to assume that inference is to comply with the Gentzenian structural rules; but even this assumption is sometimes relaxed (witness the
research done in the field of the so-called substructural logics).
It turns out that when we take inference to be single-conclusion and comply-
ing with the structural rules, the most natural tools for its explicitation are the intu-
itionist operators; whereas when we allow for multiple conclusions, what we gain
will be classical logic (see Peregrin, 2008, for details). And as I am convinced that
there are good reasons to believe that it is the single-conclusion variety that is more
basic, I think that it is intuitionist logic that is the ‘most basic’ logic from the view-
point of an inferentialist.
The reason why I think that the single-conclusion inference is more basic is
that inference may exist (according to the Brandomian picture to which I fully sub-
scribe in this respect) only in the form of the inheritance of certain social statuses,
prototypically commitments. It may exist through the fact that members of a com-
munity hold a person to be committed to something in virtue of her being com-
mitted to something else—even prior to their being able to say this. (This is
essential, for it is only the need for making this explicitly sayable that is the raison
d’être of the existence of logical particles and other expressive resources of natural
language that do make it sayable.) And while it is clear what it could take practically
to hold someone to be committed to doing something (above all, but not only, to
be prepared to sanction him, when he does not do it), and hence also what it is to

hold someone to be committed to doing something in virtue of being committed
to doing other things, it is far from so clear what it would mean to hold somebody
committed to doing one of a couple of things—or at least this would not seem to
be a candidate for a perspicuous communal practice underlying, rather than
brought about by, the occurrence of the explicit ‘or’ and other logical particles.

VI. LOGICAL VOCABULARY OF NATURAL LANGUAGE VS. LOGICAL VOCABULARY OF FORMAL LANGUAGES

I take it that Brandom’s stance is that a force behind the development of natural
language is the tendency of its users to ‘make it explicit’—i.e. to introduce means
which would allow them to formulate, in the form of explicit claims, what was only
implicit in their practices before. Hence if they endorse the inferences such as from
This is a dog to This is an animal, or from Lightning now to Thunder shortly, they are
to be expected to develop something of the kind of if . . . then to be able to say If this
is a dog, then this is an animal or If lightning now, then thunder shortly (and later
perhaps more sophisticated means which would allow them to further articulate
such claims as Every dog is an animal or Lightning is always followed by thunder). I
take it that this is the point of logical vocabulary of a natural language like English.
Now, besides natural languages we have formal languages with their logical
constants, such as the language of Brandom’s incompatibility semantics with its N,
K, and L. What is the relationship between such expressions and the logical vocab-
ulary of natural language? I must say I am not sure what Brandom’s answer is sup-
posed to be.
There seem to be two possibilities. On the one hand, there is the view that what
logic is after is something essentially nonlinguistic, some very abstract structure
which can be identified and mapped out without studying any real language (be it
a structure of a hierarchy of functions over truth values, individuals, and possible
worlds, or a general structure of incompatibility). This is a notion of logic, in
Wittgenstein’s (1956, §I.8) words, as a kind of “ultraphysics.” In this conception, any
possible language is restricted by the structure discovered by logic (for it is only
within the space delimited by the structure that a language can exist), and hence
there is a sense in which philosophy of language is answerable to logic.
On the other hand, we may hold that the primary subject matter of logic is the
logical means (words and constructions) of our real languages (especially those that
seem to be identifiable across languages and which something must possess in
order to qualify as an element of the extension of our term “language”). It is, then,
only derivatively that logic studies certain abstract structures—they are structures
distilled out of natural language similarly to how the Bohr atom model is distilled
out of our empirical knowledge of the inside of the atom. (Of course, this does not
make the study of such structures unimportant—this kind of ‘mathematization’ has

become the hallmark of advanced science.) Hence there is a sense in which logic is
answerable to the philosophy of language rather than vice versa.
Given these two options, I vote for the latter; for I am strongly convinced that
the former one is not viable. In my opinion, formal languages of logic are not means
of direct capturing of abstract logical structures, but rather simplified models of nat-
ural language—as I have already pointed out, they provide for Wittgensteinian per-
spicuous representation. They abstract from many features of natural languages,
idealize the languages in many ways, but let us see something as its backbone quite
clearly. Their logical vocabulary explicates that of natural language and though
there may be some feedback, it is not in competition with it.
I do not believe that any artificially created logical symbol, be it the traditional
‘⊃’ or Brandom’s ‘K’, ever gets employed by the speakers of English in such a way
that it can viably compete with if . . . then or and. True, logical theories based on
such symbols have helped us see the workings of our natural logical vocabulary in
a much clearer light, and in some exceptional cases, as in the texts of some eccen-
trically meticulous mathematicians (such as Giuseppe Peano or Gottlob Frege), can
even assume their role; but I do not believe that they can be generally seen as viable
alternatives, or even successors, to the natural logical vocabulary.
Now Brandom does not seem to concur with me on this point. He tries to
extract the logical operators directly from the structural features of incompatibil-
ity. This would indicate that he goes for the option I reject. But the truth, probably,
is that his understanding of his operators coincides with neither of my two options.
If this is true, then I would be eager to learn what his option is.

VII. CONCLUSION

To sum up, I think that formal semantics may be a very illuminating enterprise; and
this is the case even when we subscribe to pragmatism and to the inferentialist con-
strual of the nature of language. Of course, if you are a pragmatist and an inferential-
ist, you must understand the enterprise of formal semantics accordingly—not as
an empirical study of the ways in which words hook on things, but rather as a way
of building a ‘structural’ model of language explicative of its semantic (and hence
eventually inferential) structure.
From this viewpoint, excursions into the realm of formal semantics, as exemplified by Brandom in his Lecture V, seem to me very useful. However, I think we
need a deeper explanation of why they are useful and what they can teach us. I
believe not only that the logical analysis of natural languages without the employ-
ment of logical structures is blind, but also that the study of logical structures with-
out paying due attention to the way they are rooted within natural languages is
empty.

APPENDIX

Let me first summarize the basic conceptual machinery of Brandom and Aker
(however, I will deviate from their terminology wherever I find it helpful; and I will
also use the more traditional signs for logical operators). Given a language L
(understood as a set of sentences), a set Inc ⊆ Pow(L) is called an incoherence prop-
erty iff for every finite X, Y ⊆ L, if X ⊆ Y and X ∈ Inc, then Y ∈ Inc. <L,Inc> will be called a standard incoherence model. We write I(X) as a shorthand for
{Y | X∪Y ∈ Inc}

and we write X |= p as a shorthand for
∀Y: if Y∪{p} ∈ Inc, then Y∪X ∈ Inc.

Given that L is the set of sentences of the modal propositional calculus (based on ∧, ¬ and □), Inc is, moreover, supposed to fulfill the following postulates:
(¬) X∪{¬p} ∈ Inc iff X |= p.
(∧) X∪{p∧q} ∈ Inc iff X∪{p,q} ∈ Inc.
(□) X∪{□p} ∈ Inc iff X ∈ Inc or there is a Y such that X∪Y ∉ Inc and Y |≠ p.
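
A small executable illustration may be useful here. It is my own toy (the five sentences and their valuation are invented), and it anticipates the construction used in the final theorem below: an incoherence property is read off a three-world valuation with the universal accessibility relation, and the postulates (¬) and (□) are then checked by brute force for the sentences the toy language happens to contain.

```python
# A minimal sketch (invented toy, anticipating the construction in the final
# theorem of this Appendix): an incoherence property is read off a three-world
# valuation with R = W x W, and postulates (¬) and (□) are checked by brute force.
from itertools import combinations

WORLDS = {0, 1, 2}
TRUE_AT = {              # truth sets of the five sentences of the toy language
    "p": {0, 1},
    "r": {0, 1, 2},
    "~p": {2},
    "[]p": set(),        # p fails at world 2, so []p holds nowhere
    "[]r": {0, 1, 2},    # r holds everywhere, so []r does too
}
LANG = list(TRUE_AT)

def truth_set(X):
    ts = set(WORLDS)
    for s in X:
        ts &= TRUE_AT[s]
    return ts

def inc(X):              # X is incoherent iff it is true at no world
    return truth_set(X) == set()

def subsets(lang):
    return [set(c) for r in range(len(lang) + 1) for c in combinations(lang, r)]

ALL_X = subsets(LANG)

def entails(X, s):       # X |= s, defined purely in terms of the incoherence property
    return all(inc(Y | set(X)) for Y in ALL_X if inc(Y | {s}))

# postulate (¬), for the one negated sentence the toy language contains:
print(all(inc(X | {"~p"}) == entails(X, "p") for X in ALL_X))                        # True

# postulate (□), for the two boxed sentences:
def box_rhs(X, s):
    return inc(X) or any(not inc(X | Y) and not entails(Y, s) for Y in ALL_X)

print(all(inc(X | {"[]" + s}) == box_rhs(X, s) for X in ALL_X for s in ("p", "r")))  # True
```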

POSSIBLE WORLDS AND INTENSIONS AS BASED ON INCOHERENCE

Given an incoherence property Inc, we define a set PWInc (of ‘possible worlds’) and,
for every class X of statements, its intension ||X||Inc (the class of all those possible
worlds w.r.t. which it is true).
Definition.
PWInc ≡Def. {X | X ∉ Inc and there is no Y ∉ Inc such that X ⊊ Y} (possible worlds)
||X||Inc ≡Def. {Y | Y ∈ PWInc and X∪Y ∉ Inc} (the intension of X)

(The index will be left out wherever no confusion will be likely.) We will use the letters w, w′, w″, . . . to range over those sets of statements that are possible worlds.
Now we prove some auxiliary facts about possible worlds and intensions.
Lemma.
1. w ∈ ||X|| iff X ⊆ w
2. X ∉ Inc iff there is a w so that X ⊆ w, i.e. so that w ∈ ||X||
3. X ∈ Inc iff there is no w so that X ⊆ w, i.e. ||X|| = ∅
4. ||X∪Y|| = ||X|| ∩ ||Y||
5. if I(Y) ⊆ I(X), then ||X|| ⊆ ||Y||
6. if I(Y) ⊈ I(X), then ||X|| ⊈ ||Y||

Proof.

1. According to the definition of ||X||, w ∈ ||X|| iff X∪w ∉ Inc, which, given that nothing that is not a part of w is compatible with it, is the case iff X∪w ⊆ w and hence iff X ⊆ w.
2., 3. Obvious.
4. w ∈ ||X∪Y|| iff X∪Y ⊆ w, that is iff X ⊆ w and Y ⊆ w, and hence iff w ∈ ||X|| and w ∈ ||Y||.
5. Suppose I(Y) ⊆ I(X). This is the case iff the complement of I(X) is included in the complement of I(Y), and hence only if PW∖I(X) ⊆ PW∖I(Y), i.e. only if ||X|| ⊆ ||Y||.
6. Suppose that I(Y) ⊈ I(X), i.e. that there is a Z such that Z ∈ I(Y) and Z ∉ I(X). In other words, Z∪Y ∈ Inc and Z∪X ∉ Inc, which, according to 2. and 3., is the case iff there is a w such that w ∈ ||Z∪X|| and there is no w such that w ∈ ||Z∪Y||. This implies that there is a w such that w ∈ ||Z∪X|| and w ∉ ||Z∪Y||, which in turn implies that w ∈ ||Z|| and w ∈ ||X|| and (w ∉ ||Z|| or w ∉ ||Y||), and hence that w ∈ ||X|| and w ∉ ||Y||.

Having proved this, we can prove that the assignment of intensions is isomorphic
to the assignment of incompatibility sets; hence that the possible worlds semantics
based on PW and || . . . || is a faithful representation of the underlying incompati-
bility semantics.
Theorem.
I(X) = I(Y) iff ||X|| = ||Y||
Proof. The direct implication is trivial, as ||X|| = PW∖I(X). The indirect one follows from the last two clauses of the previous lemma. This shows that, as in the case of
ordinary modal logics, the semantic value of a sentence can be identified with
the set of all those possible worlds in which it is true.

INCOMPATIBILITY SEMANTICS AND THE MODAL LOGIC S5

Let us call the logic to which this incompatibility semantics gives rise (i.e. the result-
ing relation of consequence) standard incompatibility logic (SIL), and let us inves-
tigate its relationship to the standard systems of modal logic. (B&A have already
proved that SIL is equivalent to S5, but I present a proof which is more suitable for
what will follow.)
Lemma. Given (□), it is the case that
X∪{□p} ∈ Inc iff X ∈ Inc or |≠ p.
Proof. It is obviously enough to prove that, provided X ∉ Inc, |≠ p iff there exists a Y such that X∪Y ∉ Inc and Y |≠ p. The direct implication follows from instantiating Y as the empty set; the indirect one follows from the obvious fact that if |= p, then Y |= p for any Y.

Theorem. w ∈ ||□p|| iff for every w′, w′ ∈ ||p||; hence
||□p|| = {w | for every v: if wRv, then v ∈ ||p||}, where R = PW×PW
Proof. According to the lemma just proved, w∪{□p} ∈ Inc iff w ∈ Inc or |≠ p, hence iff |≠ p. Moreover, w∪{□p} ∈ Inc iff w ∉ ||□p||; and |= p iff p is incompatible only with incoherent sets, which means that it is compatible with every possible world and hence that w ∈ ||p|| for every w.

Corollary. Let <L,Inc> be a standard incoherence model. Then there exists an equivalent Kripkean model <W,R,V> with R = W×W, i.e. there exists a (one-one) mapping i of PWInc on W such that for every p ∈ L and every w ∈ PWInc: p ∈ w iff V(p,i(w)) = 1.
Proof. In view of the theorem we can simply take W = PWInc, i as the identity mapping, R = PWInc×PWInc and V(p,i(w)) = 1 iff p ∈ w.

Corollary. S5-validity entails SIL-validity.

Theorem. Let <W,R,V> be a Kripkean model with R = W×W. (We will use the
variables v, v′, v″, . . . to range over elements of W.) Define
Inc = {X | there is no v such that V(p,v) = 1 for every p ∈ X}

Then Inc is an incoherence property and there exists a (not necessarily one-one)
mapping i of W on PW such that V(p,v) = 1 iff p ∈ i(v).

Proof. Let us call X impossible iff there is no v ∈ W such that V(p,v) = 1 for every p ∈ X (thus, Inc is the set of all impossible sets). Let us write V(X,v) = 1 as a shorthand for V(p,v) = 1 for every p ∈ X. That Inc is an incoherence property is obvious. It is also obvious that for every v there will be an element i(v) = {p | V(p,v) = 1} of PW such that V(p,v) = 1 iff p ∈ i(v). Thus the only thing that must be shown is that Inc fulfills (¬), (∧), and (□). If we rewrite the postulates according to our definition of Inc, we get

(∧*) X∪{p∧q} is impossible iff X∪{p,q} is impossible
(¬*) X∪{¬p} is impossible iff ∀Y(if Y∪{p} is impossible, then Y∪X is impossible)
(□*) X∪{□p} is impossible iff X is impossible or ∃Y(Y∪{p} is impossible, whereas Y is possible)
Whereas (∧*) is obvious, (¬*) and (□*) are slightly more involved.

Let us start with (¬*): To prove the direct implication, assume that X∪{¬p} is impossible and that Y∪{p} is impossible, i.e. that V(p,v) = 1 for no v such that V(Y,v) = 1. It follows that V(¬p,v) = 1 for every v such that V(Y,v) = 1, and hence (since X∪{¬p} is impossible) that V(X,v) = 1 for no v such that V(Y,v) = 1, and hence that V(Y∪X,v) = 1 for no v. To prove the indirect one it is enough to instantiate Y as {¬p}.

As for (□*): To prove the direct implication, observe that if X∪{□p} is impossible while V(X,v) = 1 for some v, then V(□p,v) = 0, and hence there must be a v′ such that V(p,v′) = 0. But then i(v′)∪{p} is impossible, whereas i(v′), being the set of all sentences true in v′, is possible. To prove the indirect implication, observe that if some Y∪{p} is impossible while Y itself is possible, there must be a v′ such that V(p,v′) = 0; hence V(□p,v″) = 0 for every v″; and hence X∪{□p} is impossible.

Corollary. SIL-validity entails S5-validity (and hence they coincide).

AN EXTENDED INCOMPATIBILITY SEMANTICS
AND THE MODAL LOGIC B

Now we indicate how the addition of one more level of incompatibility may lead to
a logic different from S5.
Definition. An extended incoherence model is an ordered triple <L,Inc,QInc>,
where <L,Inc> is a standard incoherence model and QInc⊆Pow(Pow(L)) such
that:
(a) if U ∈ QInc and U ⊆ U′, then U′ ∈ QInc
(b) if ∪U ∉ Inc, then U ∉ QInc
(c) {Xi}i∈I ∉ QInc iff there are possible worlds {wi}i∈I so that Xi ⊆ wi for every i ∈ I and {wi}i∈I ∉ QInc
If {Xi}i∈I ∉ QInc, then the sets {Xi}i∈I are called quasicompatible.

(The intuitive sense behind this definition is the following: (a) simply states that
QInc is an incoherence property; (b) states that it is a weakening of Inc in the sense
that every two compatible sets are quasicompatible; and (c) states that to be quasi-
compatible is to be parts of quasicompatible possible worlds. It is obviously this last
clause that makes QInc suitable for emulating a Kripkean accessibility relation.)
Definition. Let <L,Inc,QInc> be an extended incoherence model. We say that a possible world w′ from PWInc is accessible from the possible world w, wRIncw′, iff {w, w′} ∉ QInc.

Lemma. The accessibility relation is reflexive and symmetric; i.e. every world is
accessible from itself and every world is accessible from every world that is acces-
sible from it.
Proof. Obvious.

Definition. We replace the above (□) by

(□′) X∪{□p} ∉ Inc iff for every Y: if <X∪{□p},Y> ∉ QInc, then Y∪{p} ∉ Inc.

The modal logic based on this kind of incompatibility semantics will be called
extended incompatibility logic (EIL).
Theorem. w ∈ ||□p|| iff for every w′ such that wRw′ it is the case that w′ ∈ ||p||
Proof. It is clear that
w∪{□p} ∉ Inc iff for every Y: if <w∪{□p},Y> ∉ QInc, then Y∪{p} ∉ Inc,
and, as w∪{□p} ∉ Inc iff w∪{□p} = w, that
w∪{□p} ∉ Inc iff for every Y: if {w,Y} ∉ QInc, then Y∪{p} ∉ Inc.
This can be rewritten as
w ∈ ||□p|| iff for every Y such that {w,Y} ∉ QInc there is a w′ such that w′ ∈ ||Y∪{p}||.
Hence what we must prove is that
(*) for every w′ such that wRw′ it is the case that w′ ∈ ||p||
is equivalent to
(**) for every Y such that {w,Y} ∉ QInc, there is a w″ such that w″ ∈ ||Y∪{p}||.

That (*) follows from (**) is obvious: restricting the Y’s of (**) to possible worlds yields
for every w′ such that {w,w′} ∉ QInc, there is a w″ such that w″ ∈ ||w′∪{p}||,
where {w,w′} ∉ QInc is the same as wRw′, and w″ ∈ ||w′∪{p}|| iff w″ = w′ and w″ ∈ ||p|| (as ||w′∪{p}|| ⊆ ||w′||∩||p||, w″ ∈ ||w′∪{p}|| only if w″ ∈ ||w′||, which is possible only if w″ = w′, and w″ ∈ ||p||).

Let us prove that, conversely, (**) follows from (*). Hence assume (*) and assume that {w,Y} ∉ QInc. According to (c) of the definition of the extended incoherence model, there is a w′ such that Y ⊆ w′ and {w,w′} ∉ QInc. In other words, there is a w′ such that wRw′ and Y ⊆ w′. According to (*), {p} ⊆ w′; hence Y∪{p} ⊆ w′, i.e. w′ ∈ ||Y∪{p}||.
Corollary. Let <L,Inc,QInc> be an extended incoherence model. Then there exists an equivalent Kripkean model <W,R,V> with R reflexive and symmetric, i.e. there exists a mapping i of PWInc on W such that for every p ∈ L and every w ∈ PW it is the case that p ∈ w iff V(p,i(w)) = 1.
Proof. In view of the previous theorem we can simply take W = PWInc, i as the identity mapping, R = {<w,w′> | <w,w′> ∉ QInc}, and V(p,i(w)) = 1 iff p ∈ w.

Corollary. B-validity entails EIL-validity.

Theorem. Let <W,R,V> be a Kripkean model for a language L with R reflexive and symmetric. Define
Inc = {X | there is no v such that V(X,v) = 1}
Pow(Pow(L))∖QInc = {U | ∪U ∉ Inc} ∪ {<X,Y> | there are v and v′ such that V(X,v) = 1, V(Y,v′) = 1 and vRv′}

Then <L,Inc,QInc> is an extended incoherence model and there exists a (not necessarily one-one) mapping i of W on PW such that V(p,v) = 1 iff p ∈ i(v).
Proof. That Inc is an incoherence property is obvious. Let us check that QInc fulfills the conditions (a)–(c). The definition says that a collection of sets is quasicompatible iff either the union of the sets is coherent, or it is a pair of sets true at a pair of accessible worlds. In both cases (a) is obviously fulfilled. Also, due to the first part of the definition, (b) is fulfilled. And (c) is obviously fulfilled in both cases, too. So what is left to be proved is that the postulates for the logical connectives are fulfilled, and of those (∧) and (¬) are the same as before. So we are left with (□′):
(*) X∪{□p} ∉ Inc iff for every Y: if <X∪{□p},Y> ∉ QInc, then Y∪{p} ∉ Inc.

What we know is that, given that w and w′ range over W (each identified with its world story i(w)),
V(□p,w) = 1 iff for every w′ such that wRw′ it is the case that V(p,w′) = 1,
hence
(**) w∪{□p} ∉ Inc iff for every w′ such that <w,w′> ∉ QInc it is the case that w′∪{p} ∉ Inc.

Thus we must prove (*), given (**). Let us start with the direct implication: assume X∪{□p} ∉ Inc and <X∪{□p},Y> ∉ QInc and head for proving Y∪{p} ∉ Inc. <X∪{□p},Y> ∉ QInc entails that there are w and w′ such that X∪{□p} ⊆ w, Y ⊆ w′ and <w,w′> ∉ QInc. However, X∪{□p} ⊆ w entails that w∪{□p} ∉ Inc, and hence, according to (**), that for every w″ such that <w,w″> ∉ QInc it is the case that w″∪{p} ∉ Inc. Hence, as <w,w′> ∉ QInc, it is the case that w′∪{p} ∉ Inc, and, as Y ⊆ w′, that Y∪{p} ∉ Inc. To prove the indirect implication, assume that for every Y, if <X∪{□p},Y> ∉ QInc, then Y∪{p} ∉ Inc; and hence especially that for every w, if <X∪{□p},w> ∉ QInc, then w∪{p} ∉ Inc. As <X∪{□p},w> ∉ QInc iff <w′,w> ∉ QInc for some w′ such that X∪{□p} ⊆ w′, it follows that X∪{□p} ∉ Inc.

ACKNOWLEDGMENTS

This paper evolved from the commentary I gave to Brandom’s fifth lecture
during his re-presentation of his 2006 Locke Lectures in April 2007 in
Prague; however, the evolution was so rampant that the paper no longer
quite resembles the original commentary—I hope I have managed to concentrate on what is really vital and to leave out marginal issues.

NOTES

1. In the sense of Carnap and Quine, in which an explicatum is “a substitute, clear and couched in
terms of our liking” filling those functions of the explicandum “that make it worth troubling
about” (Quine, 1960a, 258–59).
2. My urging of a ‘structural’, rather than ‘metaphysical’ reading of formal semantics goes even much
farther back, to Peregrin (1995), a book whose subtitle was Formal Semantics without Formal
Metaphysics.
3. It is also important to realize that such languages may be (more or less) parametric; i.e., they may
be mere language forms rather than real languages. When, for example, we are doing logic, we are
interested only in logical constants, and treat the rest of the vocabulary as parameters—we take
into account all possible denotations of these expressions and thus gain results that are independ-
ent of them. But the boundary between constant, i.e. fixedly interpreted, expressions and those
that are parametric, i.e. admit variable interpretations, can be drawn also at other joints. Hence it
is always important to keep an eye on the extent to which the language we are considering is a true
(fully interpreted) language and the extent to which it is only a language scheme.
4. A paradigmatic example of the discrepancy between the grammar of the regimented natural lan-
guage and that of the regimenting language of logic is the case of variables of the languages of
standard logic. Originally, variables were introduced as basically ‘metalinguistic’ tools; and it is
well known that logic can make do wholly without them (Quine 1960b; Peregrin 2000). However,
it has turned out that we reach a great simplification of the grammar of the logical languages
if we treat variables as fully fledged expressions, on a par with constants, and hence if we base
the languages of logic on a kind of grammar deviating from that underlying natural languages.
(Regrettably, this essentially technical move has led some logicians to philosophical conclusions—
such as that logic has discovered that natural languages contain variables in a covert way, etc.)
5. See Peregrin (2007).
6. Cf. McCullagh (2003).

7. See Peregrin (2006b).
8. If this question is put into a form precise enough to be answered (which I tried to do in Peregrin
2006a), it turns out that what a relation must fulfill are precisely the well-known Gentzenian
structural rules, stating that for all sentences A, B, and C and all finite sequences X, Y, and Z of
sentences it is the case that
A ⊢ A
if X,Y ⊢ A, then X,B,Y ⊢ A
if X,A,A,Y ⊢ B, then X,A,Y ⊢ B
if X,A,B,Y ⊢ C, then X,B,A,Y ⊢ C
if X,A,Y ⊢ B and Z ⊢ A, then X,Z,Y ⊢ B
More precisely I assumed that what it takes, on the most general level, for a language to have a
semantics is to classify its truth valuations into acceptable and unacceptable; and I have shown that
an inference relation can be seen as effecting such a classification just in this case. The same holds,
mutatis mutandis, for incompatibility.
9. See Peregrin (2006b).
10. See Peregrin (2005).
11. See Quine (1969, 45).
12. We follow Brandom’s terminology, according to which incoherence is simply self-incompatibility
(and incompatibility is the incoherence of union).
13. Of course, in cases in which the accessibility relation is not symmetric it may be awkward to see
it as expressive of ‘compatibility’, but we leave this aside.
14. In this way we have made a long story short. To tell the long story explicitly, we would have to say
that to make the incompatibility semantics into a regular Kripkean one, we need an accessibility
relation. That is, we need a relation such that Lp is true w.r.t. a possible world w just in case p is
true w.r.t. all worlds accessible from w. The usual way to extract the accessibility relation from the
world stories is to say that the world w¢ will be accessible from the world w iff whatever is neces-
sarily true in the latter, is true in the former, i.e.
wRw′ iff {p | Lp ∈ w} ⊆ w′
This definition guarantees that a sentence necessarily true in a possible world will be true in all
worlds accessible from it; but if it is to be the accessibility relation underlying our current necessities,
we have to show also the converse, namely that a sentence is necessarily true in a possible world if
it is true in all worlds accessible from it. Hence we must show that
If p ∈ w′ for every w′ such that wRw′, then Lp ∈ w
But given Brandom’s definition, this is easy; and it is also easy to show that R is an equivalence.
15. Pleitz et al. (to appear) have attempted to modify the definition so as to yield a less trivial modal
logic, which they achieved by restricting the range of Y in (*) to singletons. This indeed yields
them a logic weaker than S5. But the restriction they employed seems to me to be so unmotivated
that I cannot see this result as of any other than purely technical interest.
16. In general, I think we should shy away from diagnosing natural language as ‘imperfect’. The mil-
lennia of natural selection responsible for its current shape are more likely to have streamlined
and perfected it—though perfected it in its own way. It is probable that the imperfection diagno-
sis (a matter of course for many classical analytic philosophers, and the driving force behind much
of their philosophy) results from language being expected to fulfill purposes different from its
intrinsic ones—i.e. those for which it was selected.
17. I use the verb explicitate in the Brandomian sense of making explicit, whereas I am reserving the
verb explicate for use in the Carnapo-Quinean sense of devising an exactly delimited substitute
for a vague concept.
18. See Restall (2000).
19. It is important to realize that though this may seem unproblematic in some specific cases (for
example to hold a person A, who accepted a thing from a person B, to be committed to either
giving B another thing in exchange, or returning B the original thing, does not seem to be any
more problematic than, and indeed no different from, holding A to be committed to giving B a

thing), multiple-conclusion inference would require a wholly general notion of the disjunctive
combination of commitments. And it is very difficult to imagine what it would take practically to
hold a person to be committed to doing one of an assortment of unrelated things, without the
means for stating the commitment explicitly.
20. I reflect the fact that the Appendix is the common work of the two authors (hereafter B&A).
21. I use the term model instead of B&A’s frame, for, as we will see later, the natural counterparts of
these structures within Kripkean semantics are models rather than frames. (As incompatibility
semantics is built directly from language, there is nothing that would correspond to the concept
of frame of standard Kripkean semantics—i.e. the concept of a space of a possible world with the
relation of accessibility, taken in isolation of any language.) Besides this, I base the definition
directly on the incoherence property (rather than on the incompatibility function derived from
it as B&A do).

REFERENCES

Brandom, R. 2008. Between Saying and Doing: Towards an Analytic Pragmatism. Oxford: Oxford
University Press.
Carnap, R. 1934. Logische Syntax der Sprache. Vienna: Springer.
Davidson, D. 1986. “A Nice Derangement of Epitaphs.” Truth and Interpretation: Perspectives on the
Philosophy of Donald Davidson, ed. E. LePore, 433–46. Oxford: Blackwell.
Gentzen, G. 1934. “Untersuchungen über das logische Schliessen I–II.” Mathematische Zeitschrift 39:
176–210.
Haugeland, J. 1978. “The Nature and Plausibility of Cognitivism.” Behavioral and Brain Sciences 1:
215–60.
Lewis, D. 1972. “General Semantics.” In Semantics of Natural Language, ed. D. Davidson and
G. Harman, 169–218. Dordrecht: Reidel.
Lewis, D. 1975. “Languages and Language.” In Minnesota Studies in the Philosophy of Science VII, ed.
K. Gunderson. Minneapolis: University of Minnesota Press.
McCullagh, M. 2003. “Do Inferential Roles Compose?” Dialectica 57: 431–38.
Montague, R. 1974. Formal Philosophy: Selected Papers of R. Montague. New Haven: Yale University
Press.
Peregrin, J. 2000. “Variables in Natural Language: Where Do They Come From?” In Variable-free
Semantics, ed. M. Böttner and W. Thümmel, 46–65. Osnabrück: Secolo.
Peregrin, J. 2001. Meaning and Structure. Aldershot: Ashgate.
Peregrin, J. 2004. “Pragmatism und Semantik.” In Pragmatisch denken, ed. A. Fuhrmann and E. J.
Olsson, 89–108. Frankfurt a M: Ontos.
Peregrin, J. 2005. “Is Compositionality an Empirical Matter?” In The Compositionality of Meaning and
Content, ed. M. Werning, E. Machery, and G. Schurz, 135–50. Frankfurt: Ontos.
Peregrin, J. 2006a. “Meaning as an Inferential Role.” Erkenntnis 64: 1–36.
Peregrin, J. 2006b. “Developing Sellars’ Semantic Legacy: Meaning as a Role.” In The Self-Correcting
Enterprise: Essays on Wilfrid Sellars, ed. P. Wolf and M. Lance, 257–74. Amsterdam: Rodopi.
Peregrin, J. 2007. “Consequence & Inference.” In Miscellanea Logica: Truth and Proof, ed. V. Kolman, 1–18. Prague: FF UK. (http://www.miscellanea-logica.info).
Peregrin, J. 2008. “What Is the Logic of Inference?” Studia Logica 88: 263–94.
Pleitz, M., and H. von Wulfen. 2008. “Possible Worlds in Terms of Incompatibility: An Alternative to
Brandom’s Analysis of Necessity.” In The Logica Yearbook 2007, ed. M. Peliš, 119–37. Prague:
Filosofia.
Quine, W. V. O. 1960a. Word and Object. Cambridge, Mass.: MIT Press.
Quine, W. V. O. 1960b. “Variables Explained Away.” Proceedings of the American Philosophical Society
104: 343–47.
Quine, W. V. O. 1969. Ontological Relativity and Other Essays. New York: Columbia University Press.
Restall, G. 2000. Introduction to Substructural Logics. London: Routledge.
Wittgenstein, L. 1953. Philosophische Untersuchungen. English translation Philosophical
Investigations. Oxford: Blackwell.
Wittgenstein, L. 1956. Bemerkungen über die Grundlagen der Mathematik. English translation
Remarks on the Foundations of Mathematics. Oxford: Blackwell.

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Infinite Explanation

Sebastian Rödl
University of Basle

Brandom proposes, and this is the fourth of a sequence of claims he propounds in
the sixth lecture of his John Locke Lectures, that we understand language and
thought as “a development of and a special case of the sort of basic practical inten-
tionality exhibited by the kind of feedback-governed transactions mentioned in the
first three theses” (LL 6, 5). The transactions are tote cycles—test-operate-test-
exit—that link dispositions to respond in a certain way to a certain kind of stimu-
lus in such a way that the effect of a given response is the next stimulus. “A
creature’s practical engagement with its world exhibits practical intentionality inso-
far as it is feedback-governed, that is, specifiable [ . . . ] as having an algorithmic
tote-structure in which each cycle is mediated by its differential responses to the
effects of its own performances” (LL 6, 10). I argue in the first section that behav-
ior does not manifest intentionality of any kind on account of exhibiting this struc-
ture. A fortiori, thought cannot be understood in terms of it. The second section is
on Brandom’s account of objective validity. First, I question his motive. Brandom
wants to explain thought in terms of responsive dispositions because he thinks
comprehending something perfectly is being able to explain it in terms of some-
thing other. In truth, an account of this form is imperfect. Thought is an object of
a higher form of comprehension. With Hegel, I call it infinite (2.1). Brandom
explains objective validity by the unity of normative and modal vocabulary—two
species of logical vocabulary, which can be elaborated from any autonomous
vocabulary. I block the first step of this account by arguing that logical vocabulary
cannot be elaborated from dispositions to respond to stimuli. The reason for this
has a broader significance. As he conceives of acts of thought as acts of responsive
dispositions, Brandom fails to register the defining character of thought: its repre-
senting the general (2.2). This suggests that we turn Brandom’s account on its head:
the unity of normative and modal thought does not explain objective validity, but
is explained by it. This is Kant’s view. It yields infinite comprehension (2.3).

I. PRACTICAL INTENTIONALITY

A tote cycle is a sequence of acts of abilities. An ability is a stimulus-response pair:
something that has the ability responds in a certain way to stimuli of a certain kind:
when something does X to it (“stimulus”), it does Y in consequence (“response”).
It does it in consequence of being acted upon, not just after it has been acted upon,
which means that there is a causal nexus of action and affection. In their ordinary
use, the terms “stimulus” and “response” apply only to animals; comprehension of
these terms therefore presupposes a theory of the animal. But Brandom uses them
in the austere way just explained. So he can speak of automata as responding to
stimuli. In the present context, then, we do not deploy the concept of intentional
action or animal movement when we say that something does something, and does
it in consequence of being affected in a certain way. “Do” is a variable verb, and the
schema “something acts on something, which does such-and-such in consequence”
does not restrict its instances to verbs representing animal movement. The flame
heats the water, which boils in consequence. The water, suffering the stimulus of
heat, responds by boiling. We may think the following a prime example of a
response to a stimulus: I clap my hands, and the cat shies away, hiding under the
sofa. The following is just as good an example: I throw the cat into the fire, and she
burns to ashes. The fire acts on a cat, providing a stimulus, to which she responds
by burning.
This shows that the idea of a responsive disposition does not help us under-
stand the intentionality animals exhibit in, I quote, “dealing skillfully with the
world” (LL 6, 3). It is not true that, I quote, “the most basic form of such activity is
a tote cycle of perception, performance, assessment of results of performance, and
further performance” (LL 6, 3). A tote cycle is an “algorithmic elaboration of
responsive dispositions” (LL 6, 4): there is a stimulus (test), and a response (oper-
ate). The effect of the response is the next stimulus (test), to which there is a fur-
ther response (operate), and so on, until there is no further response (exit). We
must not be misled by the word “perception” when Brandom speaks of a tote cycle
of perception, performance, and so on. Usually, “perception” signifies a form of
intentionality, and a power distinctive of the animal, wherefore, again, we under-
stand this word only to the extent that we know what an animal is. But just as above
“do” did not mean animal movement, but any change, here “perception” means any
case of something’s being affected by something. Now, the cat burning in the fire
does not exhibit intentionality, not by burning. She does not skillfully deal with the
world as she burns. The following is no account of animal intentionality: some-
thing exhibits intentionality in doing something in consequence of being acted
upon by something else. But nothing changes if we require that there be a sequence:
stimulus, response, further stimulus, further response, and so on. And nothing
changes if later stimuli are previous responses. And nothing changes if at a certain
point there is no longer any response (exit). What is algorithmically elaborated
from responsive dispositions is a responsive disposition. If we do not find inten-
tionality in the disposition to do a certain thing when acted upon in a certain way,
then no amount of algorithmic elaboration changes that.
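To fix ideas, the bare structure just described can be set out in a few lines of code. The sketch below is mine, not Brandom’s or Rödl’s; the names are invented and Python is used merely as notation. It shows a loop in which a test on the current stimulus either exits or hands the stimulus to an operation whose effect becomes the next thing tested, which is all that the algorithmic elaboration of responsive dispositions, taken by itself, amounts to.

    # A minimal, illustrative test-operate-test-exit loop built from responsive
    # dispositions. The callables are placeholders, not anything in Brandom's text.
    def tote_cycle(initial_stimulus, test, operate):
        # test(stimulus): True if a further response is called for, False to exit.
        # operate(stimulus): the response; its effect is treated as the next stimulus.
        stimulus = initial_stimulus
        while test(stimulus):               # test
            stimulus = operate(stimulus)    # operate; the effect is the next test's input
        return stimulus                     # exit

Nothing in the loop itself distinguishes the mere suffering of effects from skillful coping; whatever unity the sequence has must be supplied from elsewhere, which is the point the following paragraphs press.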
Tote cycles, as such, have nothing to do with intentionality. Nor would anyone
think they did, did he not imagine, imagining a tote cycle, an animal or a man skill-
fully dealing with the world. (This is how it must be with Brandom, when he
describes building a house as a tote cycle.) So how is it when an animal or a man
skillfully deals with the world? Consider this: I want to kill a man. I see him (stim-
ulus), shoot him in the chest (response), walk up and look, and see he is still mov-
ing (next stimulus, effect of my response to the previous stimulus), shoot again
(response), see that he is moving no more (stimulus, effect of my response to the
previous stimulus), and walk away (exit). Here there is not just a sequence of stim-
ulus and response. And not only is the effect of a given response the next stimulus.
The responses are united as means to an end I represent. They are united as steps I
take to kill the man. This unity is constituted by my representing it, by my deriving
its elements, the steps, from it. Which explains the exit: I see I have reached my end;
the man is dead. My killing the man exhibits a teleological structure that is consti-
tuted by my representation of this very structure. We may call this rational teleol-
ogy. Here the stimuli are not the ultimate causes of my responses (which therefore
it is misleading to call responses). A stimulus—I see the man still moving—lets me
shoot again because I am inside this process: killing the man. Therefore this process
cannot be explained as a sequence of stimuli and responses; on the contrary, it is
the principle of the sequence. While the process exhibits intentionality (in the given
case, it exhibits not only sentience, but sapience; it is an act of applying a concept
in such a way as to realize an instance of it), its intentionality cannot be understood
in terms of the tote cycle. The cycle, its unity, must be understood through it.
A tote cycle may be governed by the practical application of a concept. It may
also be governed by the natural teleology of vital processes. It is sometimes thought
that one has left teleological explanation behind when one can describe the mech-
anisms by which a plant or organ does this or that. But describing a mechanism is
describing a teleological process. Its end is the principle of the unity of its steps;
only through the end, through what it is a mechanism for, is a mechanism appre-
hended. There is a more specific form of natural teleology that is the source of the
concept of perception. It, too, may subsume tote cycles. A chimpanzee fishing for
termites sticks the stick in the mound, gets it out, sees termites, licks them off, and
so on. The unity of this sequence is the teleological process of extracting termites.
The process is not explained by the tote cycle; the tote cycle, its unity, is explained
by it. If we want to understand natural teleology in general, the special form of it
signified by the concept of sentience, and the rational teleology signified by the con-
cept of the will, then Brandom cannot help us because he erroneously thinks he has
explained teleology, directedness, intentionality in terms of tote cycles.
The basic concept of Brandom’s account of intentionality is the concept of a
responsive disposition, a disposition to do such-and-such when being acted upon
in such-and-such a way. In lectures two and three, he anticipates that, I quote, “the
stimulus-response model might seem to impose a formal, narrowly behaviorist
strait-jacket on what counts as a primitive ability” (LL 2, 6), which one might think
precludes it from throwing light on language and thought. Brandom thinks he can
lay this worry to rest by observing that the model does not restrict the terms in which
the stimulus and the response are described. For example—these are Brandom’s
examples—the stimulus may be lyrical poetry and the response painting a well-
composed picture. The disposition to respond to the presentation of a poem with
painting a well-composed picture is one that, presumably, only sapient creatures
could exhibit. Brandom envisages other responsive dispositions that “only a sen-
tient creature could exhibit” (LL 3, 17). These passages reveal the fundamental error
that lies at the basis of Brandom’s approach. The idea of a responsive disposition
designates a certain kind of unity of what affects something and what it does. If we
call the unity the form, then the terms of the unity, the stimulus and the response,
are the matter. Brandom thinks that any being—inanimate, living, sentient, sapi-
ent—is the subject of responsive dispositions. There may be differences in the kind
of stimulus to which certain beings can respond, and in the kind of response that
can come from them. And, Brandom thinks, “sentience” and “sapience” signify spe-
cial terms of responsive disposition, that is, a special matter of this kind of unity of
action and affection. He does not consider the possibility that “sentience” and
“sapience” signify different forms, that is, different kinds of unity of what affects
something and what it does. But this is what these terms signify, as the examples
show. In a teleological process, the nexus of action and affection depends on the
principle of the process, which therefore, on its part, cannot result from affection
and be the act of a responsive disposition. I do not shoot whenever I see a man
moving. I do so when killing that man is what I am doing. Then his moving shows
me that I have not yet killed him and need to shoot again. What is true of my
shooting again holds of my killing the man: its ultimate cause will not be a stimu-
lus to which it is the response. Action and affection bear a different kind of unity
when the action is a vital operation, an animal movement, or an intentional action.
Within this form of unity, we can distinguish natural teleology or life, sentient tele-
ology or the animal, rational teleology or thought. These are not ever more com-
plex constellations of responsive dispositions, but different kinds of unity of action
and affection.

II. OBJECTIVE VALIDITY

Brandom wants to illuminate objective validity through an inner nexus of norma-
tive and modal vocabulary. I think his idea can be presented as follows. Brandom’s
incompatibility semantics does not associate sentences with something other than
sentences; it represents the meanings of sentences as other sentences, or sets of
them. Hence we may doubt whether sentences, in virtue of being meaningful in the
way represented by this incompatibility semantics, represent an object. A possible
answer is this. Think what it means that sentences are incompatible: it means they
cannot be true together. It means not all of them agree with the object. Incompati-
bility is a relation between sentences, but they bear this relation to each other in
virtue of the relation they bear to the object. Now, this is not Brandom’s answer. He
sets great store by the fact that he can treat the concept of incompatibility as a
semantic primitive and need not explain it in terms of truth. He can do this, he
thinks, because he can give a pragmatic account of it and explain what it is to treat
sentences as incompatible: it is to treat commitment to one as precluding entitle-
ment to the other. If this is to show that the concept of incompatibility is independ-
ent of the concept of truth, then the concepts of commitment and entitlement used
here must themselves be independent of the concept of truth. Commitment and
entitlement are kinds of correctness, and talk of correctness refers to a measure. It
may be thought the relevant measure is agreement with the object. But Brandom
does not want to invoke that at this point. We are to understand the concepts of
being committed and entitled to a claim independently of the notion of agreement
of the claim with the object. It is not clear how then we are to understand them. In
the Locke Lectures, Brandom gives no indication of what he means by “commit-
ment” and “entitlement”. He does so in Making It Explicit, where he explains being
committed in terms of what it is to take someone to be committed, and taking
someone to be committed in terms of the sanctions he suffers in case he does not
honor his commitment. Presumably the lectures presuppose this.
So we are not to rely on the idea of relating to the object in thought or judg-
ment when we describe sentences as incompatible. Rather, Brandom argues, we
receive that idea from this description. Since we did not already invest it, this amounts
to an explanation. We receive it in this way. Consider a vocabulary of sentences
joined by inferential and incompatibility relations. We can say how sentences must
be used to be such a vocabulary; their use must involve suitable consequential rela-
tions of commitments and entitlements (or lack of them). We can show that the
ability to use such a vocabulary can be developed into an ability to use normative
vocabulary and modal vocabulary, which in different ways describe or specify the
very ability from which they are developed. Normative vocabulary expresses the
idea of a subject of commitments and modal vocabulary expresses the idea of an
object to which her commitments are responsible. So with normative vocabulary
and modal vocabulary we possess the idea of a subject representing an object. But
modal and normative vocabulary are in a certain sense contained in any vocabu-
lary of sentences standing in inferential and incompatibility relations, that is, in any
vocabulary that, according to a leading assumption of the inquiry, is used to say
anything at all. Moreover, modal and normative vocabulary describe the use of the
vocabulary in which they are contained. If we allow ourselves to be lyrical, we can
say: any vocabulary the use of which is saying something describes itself as a practice of
subjects representing objects.
The form of my objection to this claim will be clear from the foregoing. It is
not possible to describe the use of language by the formal concept of responsive
disposition. Responsive dispositions do not constitute intentionality of any kind;
the burning cat shows this. Joining dispositions to states and states to dynamic sys-
tems does not change this. We cannot comprehend vital processes through the con-
cept of responsive disposition, much less animal movement or action or language.
These acts are distinguished not by the complexity of the responsive dispositions
from which they spring, but by the different kind of unity of action and affection
that they exhibit. The highest form of unity is this: something affects a subject who,
being so affected, represents it in a judgment. A judgment is, and is understood by
its subject to be, valid of its object. So we may call this sort of unity objective valid-
ity. Objective validity is not understood through responsive dispositions. A judg-
ment is not a response to a stimulus. It bears a different relation to affection.
It follows that no light is thrown on objective validity by the unity of norma-
tive and modal language, if language is described in terms of responsive disposi-
tions. Comprehension of normative and modal language and their unity must
begin from objective validity as a different kind of unity of what affects something
and what it does.
In what follows, I argue that Brandom’s failure to realize that his topic is a
unity of action and affection different from the unity of stimulus and response cor-
rupts his account of logical vocabulary. Then I sketch how, according to Kant,
whom Brandom wishes to follow, the unity of modal and normative thought does
not ground, but is grounded in the objective validity of judgment. But before I do
so, I want to bring out a methodological claim implicit in what I have said.
Brandom often recommends his project by saying that, if it succeeds, it provides a
superior form of understanding. So he tells us to wait and see how far he can get
with his reconstruction of the abilities that constitute language use, to judge the
theory by its fruits and let the proof be in the pudding. This reply does not register
that the objection is also against the method: it entails that the form of understand-
ing the theory would deliver, if it were possible, would be inferior.

2.1 INFINITE COMPREHENSION

Brandom proposes that we explain intentionality as a constellation of responsive
dispositions. The explanation may deploy special concepts to describe the terms of
the dispositions. We have to speak of commitments and entitlements. It is the privi-

128
lege of thinkers to have dispositions whose description requires these terms. But
thinkers do not have something other than responsive dispositions. We do not need
to think differently of the unity of what affects them and what they do. That unity
is the same everywhere: a stimulus calling forth a response. We explain animal
intentionality by the condition that the responsive dispositions be linked so as to
form a tote cycle. For discursive intentionality, we impose further conditions: there
must be dispositions to attribute commitments and entitlements that institute
inferential and incompatibility relations.
Here we comprehend something by representing it as certain elements satisfy-
ing certain conditions; we independently understand the elements and the condi-
tions. Brandom thinks this is the highest mode of comprehension, the gold standard,
he says. This seems wrong to me. Comprehension of this form is, as such, imper-
fect. We explain X as certain elements’ satisfying certain conditions. But now we can
ask why, in a given case, given elements satisfy these conditions and satisfy them all.
The answer is not contained in the nature of the elements; they may or may not sat-
isfy the conditions. So we must turn away from the elements and their unity to
something other to explain why they have come together in the manner required
for X. But then there is a sense in which it remains an accident that they have come
together in this way. For, the other thing to which we now turn, although it, too,
may be explained by something other still, need not have been present in the sense
that the nature of X does not account for its presence. Consider this: a brick
becomes loose in a storm, slides down the roof and falls. It hits someone who passes
by and kills him. We can explain why he was killed: the brick hit him. We can
explain why the brick fell: it became loose in the storm. It remains an accident that
the man was killed. What explains the fall of the brick does not explain its further
causality, its killing the man; it was not necessary that things came together in this
way. Note that it is different when I have loosened the brick, calculating that the
man will pass by at such-and-such a time, will be hit by the brick and be killed.
Explanation of something by something other Hegel calls finite explanation. An
infinite explanation, by contrast, explains the elements and the conditions in virtue
of satisfying which they constitute X by the whole or unity they thus constitute.
Here we need not turn to something other in order to comprehend why given ele-
ments satisfy the conditions and satisfy them all. The nature of X, which is internal
to the elements that constitute it, accounts for that. In this way, the nature of X
accounts for its existence. What is capable of this form of comprehension Hegel
calls an idea. The first kind of idea, he thinks, is a life-form, then there is knowledge,
theoretical and practical, and finally pure reason.
If thought, or language, were such as to be understood in the inferior manner
that Brandom thinks highest, then there would be no complete understanding of
it. But the intellect cannot be such an unworthy object of the intellect. If anything
can be understood perfectly, then the intellect. Indeed, our reflections show that
Hegel is right in thinking that intentionality, natural or rational, is an object of infi-
nite explanation. Intentionality is not explained by responsive dispositions, because
a sequence of stimulus and response manifests intentionality only if the whole of
the sequence explains its members, which whole then is a teleological process. An
explanation by responsive dispositions is finite, explaining one thing, the response,
by another, the stimulus, which is not explained in the same act, while explanation
of a manifold by a unity under which it is subsumed is infinite. I said that “life”,
“sentience”, “sapience” signify a unity of what affects something and what it does
that is different from that signified by the concept of responsive disposition. We
now see that this means that life, sentience, sapience are the object of a higher
account than one that satisfies Brandom’s gold standard. They are, in Hegel’s term,
ideas, and an object of infinite comprehension.

2.2 LOGICAL VOCABULARY: INTRODUCTION OF CONDITIONALS

There is no reason to be disappointed that an account of thought and language in
terms of special responsive dispositions is inadequate to its object. For, an account
of this form yields only finite comprehension. Brandom’s wish to give an account
of this form, motivated by his notion that it fits the gold standard of comprehen-
sion, leads him to a false account of logical vocabulary. He explains how logical
vocabulary is contained in any vocabulary by reflecting on the conditional as the
paradigmatic logical symbol. The conditional, he says, can be introduced by algo-
rithmic elaboration from any inferential practice. This seems wrong to me. It is not
possible to introduce conditionals by algorithmic elaboration from a given set of
responsive dispositions. For, asserting that something follows from something is
not responding to a stimulus.
We can focus the issue by considering a parallel. Quine defines an observation
sentence by its involvement in a responsive disposition: an observation sentence is
tied to a set of stimuli that call forth assent to the sentence. He then introduces
observation categoricals reporting the regular coincidence of two stimuli, each
associated with a given observation sentence. If we use “φ” and “ψ” as observation
sentence schemata, then the schema of an observation categorical is “whenever φ,
ψ”. How might someone who can use observation sentences come to use observa-
tion categoricals? If the stimuli to which “φ” and “ψ” are tied regularly come
together, one will associate them, that is, one will, upon suffering the one, expect
the other. One may be said to express this expectation in doing something appro-
priate to the expected stimulus. In the world of our cat, the sound “fleisch” and
salami on the food plate come together regularly. So she expects salami on her food
plate when she hears “fleisch”. She expresses this expectation by hurrying to her
food plate upon hearing “fleisch”. When she hurries to the plate, she manifests
behavior informed by the regular coincidence of “fleisch” and salami. This nexus of
stimuli has sunk into her responsive dispositions. Thus we might say that her
behavior expresses this regular coincidence. But her behavior is not a case of using
an observation categorical. As Quine explains, “[an observation categorical is] a
generalized expression of expectation”.1 Our cat expresses her expectation by hur-
rying to the food plate. This is not a generalized expression. A generalized expres-
sion would express that, in general, she expects meat when she hears “fleisch”:
“whenever ‘fleisch’, salami”.
Quine suggests that the step from using observation sentences to using obser-
vation categoricals is small. But this cannot be true. If it were, why does not our cat
take it? The small step is from being able to use the observation sentences to
absorbing a regular nexus of stimuli to which they are tied in one’s responsive dis-
positions. This step our cat takes. It is a different thing to tie the use of a sentence, no
longer to a stimulus, but to a regular nexus of stimuli. An observation categorical is
used, not when this or that stimulus is suffered, but when this stimulus generally
coincides with that. And here, “when” bears a different sense. In “the observation sen-
tence is used when the stimulus is suffered”, it means “at the time when”. But this is
not its meaning in “the observation categorical is used when this stimulus and that
stimulus regularly coincide”. For, there is no time when this happens. It is not some-
thing that happens. In using an observation categorical, one responds to the fact
that the stimuli regularly coincide. This is what Quine must mean when he says that
the categorical expresses that in general one expects this stimulus upon having suf-
fered that one. But here the response is to something general. So the response is no
longer to a stimulus. For, the general is not a stimulus. The general does not affect
the senses.
Let us return to Brandom. He says a conditional describes or specifies the prac-
tice, the system of responsive dispositions, from which it is elaborated. If that is
right, then its use is not an act of such a disposition. Responsive dispositions are
described in statements saying what in general is done and when. And the condi-
tions of the correct use of such a statement cannot be tied to a stimulus. The use of
a conditional is not a response to a stimulus and cannot be introduced by response
substitution. Brandom’s description of the stimulus to which a conditional is to be
tied as a new response is ambiguous, but it must be read as describing something
general, in which case it describes no stimulus.
Brandom writes:
By hypothesis, the system has the ability to respond differentially to the
inference from p to q by accepting or rejecting it. It also must have the
ability to produce tokenings of p and q in the form of assertings. We
assume that since it can produce those assertions, we can teach it also to
produce-assertively tokenings of the new form “if p then q.” What is
required, then, is first that this new sort of response be hooked up
responsively to the previously discriminable stimulus, so that it is
asserted just in those cases where the inference from p to q would have
been responded to as a good one. This is an exercise of the algorithmic
elaborative ability I earlier called “response substitution”: responsively
connecting a previously distinguishable stimulus-kind to an already
elicitable performance-kind. (LL 2, 21)

This may be read as a rule for the use of a specific formula “If p then q”. The rule
would be that the formula is used to greet someone’s moving from the assertion p
to the assertion q, if that inference is a good one. It is then a form of saying “yes” to
a move, a special form of “yes” that can only be used when the move is from p to q.
There will be a different rule for, say, the sentences s and r: the formula “If s then r”
is a special “yes” said only of a move from s to r. These rules share a common form.
But one is not conscious of this form in virtue of following rules of this form.
Hence, if Brandom’s description is read in this way, then the formula “If p then q”
does not say that the inference from p to q is good; it does not express conscious-
ness of a rule of the practice. Brandom’s description of the stimulus to which the
use of a conditional is to be a response may also be read as saying how, in general,
formulae of the form “If φ then ψ” are to be used: such a formula is to be used,
when the inference from φ to ψ is good or, equivalently, when one would treat a
move from φ to ψ as valid. Here, there is not just a sameness of rules, a grasp of
which is not internal to an act of following any of them, but the generality is inside
the rule and must be grasped by someone who follows it. In consequence, the
“when” in the formulation of the rule does not mean “at the time when”. There is
no time when an inference is good, and there is no time when one would treat a
move as valid. So the conditions under which it is correct to use a conditional do
not specify a stimulus to which the use of the conditional is a response.
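To make the contrast between these two readings concrete, here is a minimal sketch, with invented names and in Python notation, of the response-substitution step that Brandom’s quoted passage describes; it is my illustration, not his formulation. Written out, the generality sits in the parameters of the rule that we, the describers, state; the hookup of a stimulus-kind to a performance-kind does not itself express that generality, which is just where the difficulty lies.

    # Illustrative only: hook the previously discriminable stimulus-kind ("the
    # inference from p to q is treated as a good one") to a new performance-kind
    # (assertively tokening "if p then q"). The callables are placeholders.
    def elaborate_conditional(endorses_inference, assert_sentence):
        def respond(p, q):
            if endorses_inference(p, q):                   # stimulus: the inference is endorsed
                assert_sentence("if " + p + " then " + q)  # new response, by substitution
        return respond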
It is not possible to introduce logical vocabulary by algorithmic elaboration
from a system of responsive dispositions. For, the use of a conditional constitutes
consciousness of something general. And while any algorithmic elaboration from
responsive dispositions is a responsive disposition, consciousness of the general is
not an act of responding to a stimulus, for the general is no stimulus. This does not
mean that logical vocabulary is not internal to any autonomous vocabulary. It
means that, if it is, then in no case is an assertion a response to a stimulus. This is
anyway the insight behind the claim that a system of Quinean observation sen-
tences is not a language, an insight Brandom spoils when he thinks it means that a
language needs further responsive dispositions as opposed to something different
altogether.
But what? We can indicate the direction of an account that would not be of the
inferior form Brandom thinks is the highest by returning to our reflections on the
rational teleology of action. The unity of an intentional action is mediated by a rep-
resentation of this unity, from which its steps are derived. So the action does not
just exhibit a unity, but the unity is such as to be known by the acting subject.
Intentional action is self-conscious. An assertion, insofar as it expresses a judgment,
is not an action, but it may share this formal character: rules that govern assertion
are such as to be known by the asserting subject. If these rules describe a practice,
then this practice is such as to be known by her who participates in it. What gen-
eral statements describing the practice represent is such as to be known by bearers
of the practice. The practice is self-conscious. Then it is a small step from follow-
ing the rules of the practice to representing them in speech. For the power to fol-
low them is a power to represent them, which power, however, is not a responsive
disposition, but a self-conscious spontaneous power. Of course this concept, the
concept of a self-conscious spontaneous power, is obscure (for us, not in itself) and
must be developed.2

2.3 KANT’S ACCOUNT OF THE UNITY OF MODAL AND NORMATIVE THOUGHT

Brandom associates the thesis that modal and normative thought are internal to
thought as such with Kant. This seems right. But Kant explains in a different man-
ner why thought as such contains modal and normative concepts. He derives their
necessity from the objective validity of judgment. Judgment as such is objectively
valid. It is, and is understood by the judging subject to be, valid of the object. This
defines the general form of a judgment, the unity of representations in a judgment:
the unity of representations in a judgment is the unity in virtue of which the judg-
ment relates to the object in this way. Kant argues further that this unity of repre-
sentations, the objective unity of apperception, contains specific forms of unity,
among them the unity of judgments designated by the concept of causality and
lawful connection. This explains why Hume’s atomism is wrong. It is wrong because
the objective unity of apperception contains a unity of representations that involves
necessary connections. Representations’ being related in this way is the form by
which they relate to an object. Modal concepts are contained in the power of judg-
ment because a judgment is objectively valid and thus exhibits the unity in virtue
of which it is so valid, which unity is the source of modal concepts.
Kant argues further that the objective unity of apperception, the unity of rep-
resentations in virtue of which the judgment they constitute is objectively valid, is
the unity by which they are self-conscious. The subject knows herself to be the sub-
ject of representations not in virtue of any character of their content, but in virtue
of their unity. And this is the same unity as the one by which they relate to an
object. Now, as the unity in virtue of which judgments are objectively valid is such
as to be represented by the subject, that unity may figure in the subject’s thought as
an imperative. The imperative form of the representation manifests an imperfec-
tion in the subject. It comes on the scene when something has gone wrong, as when
the subject judges something to be the case that, according to what the subject takes
herself to know, cannot be the case. In this way, the self-consciousness of judgment
is the source of normative concepts.
The same formal character of judgment is the source of modal concepts and is
the source of normative concepts. Not only are both modal concepts and norma-
tive concepts internal to judgment. They spring from the same source: the objec-
tive unity of apperception.
Brandom’s and Kant’s accounts are finite and infinite respectively. Kant’s cen-
tral term is judgment, Brandom’s assertion; this difference is not important now.
Brandom introduces assertion as the act of a responsive disposition that is part of
a system of such dispositions. (The dispositions are special in that their terms, stim-
uli and responses, require special vocabulary for their description.) Then he argues
that such a system can be algorithmically elaborated so as to include dispositions
whose acts can be recognized as acts of using modal and normative vocabulary.
Finally he maintains that, through this vocabulary, the practice of assertion reveals
itself to be a practice of subjects relating to objects. In this way Brandom thinks he
has explained objective validity in terms of a constellation of responsive disposi-
tions. This is an explanation, he thinks, because an account of the concepts that the
explanation deploys, the concept of a responsive disposition and the concepts of
commitment and entitlement that describe the terms of the relevant dispositions,
does not rest on a prior understanding of the idea of objective validity. Formally,
this explanation is finite and therefore imperfect.
Kant begins with the nature of judgment, which is objective validity. This is the
principle. It explains. We understand it when we see what and how it explains. It
explains in this way: the objective validity of judgment requires that judgments
exhibit a certain unity, the objective unity of apperception. This unity is the source
of modal and normative concepts, which thus are revealed to be internal to the
power of judgment. So objective validity is not understood through independent
ideas of the modal and the normative. These are understood through it. Objective
validity is the principle of modal and normative thought and the principle of their
unity. Thus it is a source of infinite comprehension of itself. Even if we think we
cannot follow the account of the intellect that Kant gives in the Critique of Pure
Reason, we ought to aspire to an account of that form.
I want to end with pointing out a consequence of Brandom’s account of the
subject. Brandom says that, as incompatibility of determinations constitutes the
identity of the object so determined, so does incompatibility of commitments con-
stitute the identity of the subject so committed. I believe this is right. But it shows
that Brandom and I are one subject. Brandom does not see this because he does not
follow his theory, but a prior idea he has that he and I are not one. If it is said that
this is red and green, then something must be wrong. But if it is said that this is red
and that is green, then it may be that everything is right. As red and green are said
of different objects, these judgments are not in tension. Brandom suggests the par-
allel is this: If I say it is red and green, then something has gone wrong. But if I say
it is red and Brandom says it is green, then nothing need have gone wrong. As dif-
ferent subjects make the judgments, they are not in tension. But this is not true.
When I say it is red, and Brandom says it is green, then something has gone wrong.
I cannot say, everything is fine, after all it is Brandom who says it is green, not I. I
cannot say this because, in judging, I claim universal validity. I judge for any judger.
So does Brandom. There is a conflict in the judger, whom Brandom and I repre-
sent. If we follow Brandom’s theory beyond prejudice, we must say: that the dis-
agreements between us are disconcerting proves that we are one subject of
judgment.

NOTES

1. W. V. O. Quine, Pursuit of Truth, rev. ed. (Cambridge, Mass.: Harvard University Press, 1992), 24.
2. See my Self-Consciousness (Cambridge, Mass.: Harvard University Press, 2007).

PHILOSOPHICAL TOPICS
VOL. 36, NO. 2, FALL 2008

Responses

Robert Brandom
University of Pittsburgh

I. RESPONSE TO JOHN MCDOWELL

Thank you very much for your thoughtful comments, John. I see you as having
raised three questions: an overarching methodological one, an issue specifically
about Wittgenstein, and in closing, a point on the indexicals, so let me address things
in that order.
The methodological question, I think, is indeed inseparable from how we think
about the prospects for analytic philosophy. When I was first writing these lectures,
the first human being who read the early version of them was my Doktorvater
Richard Rorty. His characteristic response was thematic for John’s remarks. He said,
“Why would anyone set out intentionally to prolong the death-agonies of analytic
philosophy by another decade?” which is what he thought I was trying to do here.
That seems to me more or less the attitude John is taking. I want to say two things.
First, I think that there is something to this tradition. I think that there is something
that we, in an extended sense of ‘we’, have been on about during this period in
Anglophone philosophy that is worth recovering from the wreckage of some of the
paradigmatic core programs of it. As John says, I’m not empiricist, I’m not a natu-
ralist, I’m not an artificial intelligence functionalist, either. I don’t think any of those
core programs is actually going to work. He is then asking, reasonably, “Then, why
do you think that there is something worthwhile here?”
One thing to say about this is that there are two things I am trying to point to
a new future for, breathe new life into. One of them is analytic philosophy; the
other is pragmatism. This is supposed to be an analytic pragmatism. I don’t see this
as a one-way street. I think John thinks that pragmatism is alive and triumphant,
so its organs are going to be transplanted into the corpse or near-corpse of analytic
philosophy. I actually think pragmatism has been looking somewhat peaked these
days, too, and that it is precisely the quietistic aspect of it that is the problem. “Don’t
think, look.” The advice is to eschew theorizing and offer microdescriptions of
potentially puzzling practices. But actually, we haven’t even been doing all that
much microdescription of practices. In the pragmatist tradition, it is not so clear
what we should be doing: tending our roses and not worrying about it, maybe.
Again, it seems to me that there are really fundamental and important insights in
the pragmatist tradition and actual, important work of understanding to be done
from that perspective, and that what is needed to help us do that is to bring it
together with some of the insights of analytic philosophy.
So what is wrong with these core programs? I do think, as John suggested, it is
a certain way of thinking about the privileging of one vocabulary over another. In
my lecture, I didn’t say anything, for reasons that may become clear, about what
reasons you might have for privileging one vocabulary as a base vocabulary over
another as an empiricist, as a naturalist, as a functionalist about the mind. Mostly,
I’m completely uninterested in that question. For it seems to me that the interest in
understanding the relations between vocabularies and the extent to which it is pos-
sible to say in one vocabulary most, if not all, of what can be said in another vocab-
ulary, perhaps by the use of logic, should not at all be held hostage to the question
of why it is you are interested in those vocabularies, or any collateral views that you
may have about how one of them is more important or privileged in some way
rather than another. It seems to me that there is a kind of intelligibility, a kind of
understanding, that comes from investigating these relations between vocabularies
and that one precisely should not take the second step that I think was characteris-
tic of traditional metaphysics.
I think of traditional metaphysics in general as the attempt to find a vocabu-
lary in which everything can be said. Its favored vocabulary is one that for some
reason it is confident about. That is its base vocabulary. Now when we look at other
vocabularies, it will never turn out that one was completely successful: that you can
say everything else in the favored terms, even by whatever potentially controversial
standards of success are in play. There are always going to be some things that
you’ve got a good story about. One can say in the base vocabulary these things from
the target vocabulary. But then there are always going to be these awkward things
that you can’t say in the favored terms. My view at that point is that one ought to
say that we’ve learned something when we see that, when we see that the expressive
resources of this vocabulary suffice to express this much of what you could say in
this other vocabulary, but not these other things. But the impulse characteristic of
traditional metaphysics was to say, so those things aren’t real, or aren’t intelligible,
or are in some other way deceptive or bad. What is real (or whatever the metaphys-
ical honorific is) is what you can say with the expressive resources that the meta-
physician has for some reason or other argued are particularly perspicuous or
whatever; the rest of these, they’re not real. I think that is the step one should not
take. Just learn the detailed lesson about the relative expressive powers of the vocab-
ularies in question and move on.
This relaxed, nonmetaphysical view is connected with my views about logic.
The two traditional questions in the philosophy of logic were the question of
demarcation (What is logic? Or, as I would prefer to put the point: What makes
something a bit of logical vocabulary?) and the question of correctness (What is the
correct logic?). But if you give my expressive answer to the first question—roughly,
that the expressive role distinctive of logical vocabulary is making explicit propri-
eties of inference—then the second question lapses. The beginning of wisdom in
thinking about logic is not to look for the right logic but to observe the different
expressive powers of classical logical connectives, intuitionistic ones, relevance ones, and
so on. Dummett notices some differences in the expressive power of intuitionism
and classical logic and says, “So classical logic is unintelligible. It manages to confer
no actual content on its things. Only what meets these particular conditions, has
this particular expressive power can be correct.” That is the old bad metaphysical
impulse.
My picture is that understanding is knowing our way around in ultimately an
inferentialist-expressivist network of different vocabularies, practice fragments, and
so on, and practically learning what the relations among them are. So we can just
pick some vocabulary—for all I care, because it is Tuesday, we pick this as a base
vocabulary, and see what that is expressed in other target vocabularies we can find
a way to express in this base vocabulary. And then on Thursday, we pick another
one, and see what we can express in its terms. Eventually we are going to know our
way around better, mastering the relations between these different kinds of vocab-
ularies, which cut across each other in various ways. So I don’t think that what
ought to be understood as essential and taken forward in a progressive way in ana-
lytic philosophy was the extent to which it was a metaphysical program. It would
have horrified the founders of this tradition to have it described this way, but I
think having this invidious, discriminatory notion of privileging just is the old
metaphysical impulse. That is always what was unpleasant and thuggish about the
analytic tradition. But it has never been what was important.1
I do not want to say, with the metaphysical naturalist, for instance, that one
must be able to explain the significance of a signpost in a vocabulary that resolutely
restricts itself to talking about pieces of wood and movements of people in the
vicinity. The view that would say, “Well, if you can’t say everything about it in the
language of pieces of wood and movements of primates in the vicinity, then it’s
unintelligible, or there is something defective about it,” would be making that sec-
ond, inappropriate, metaphysical move. But resisting that move leaves room for it
still to be worth thinking about various naturalistic vocabularies and what expres-
sive relations they stand in to various other kinds of vocabularies. That’s the spirit
in which I am going to be talking about normative vocabulary, about modal vocab-
ulary, about intentional vocabulary, in the lectures to come. So I think that there’s
a way to draw a line that puts what is objectionable about the analytic tradition, its
metaphysical impulses, to one side while still keeping what was right and genuinely
contributing to our understanding of things within the analytic tradition.
As to the discussion of Wittgenstein: my beloved colleague John is one of those
who I think draws the quietist, particularist, semantic nihilist conclusions from
Wittgenstein. He would, I think, say, “That’s not a conclusion from a general view
on language because Wittgenstein wouldn’t have a general view of anything. The
point is the particularism, the nihilism, and so on.” Well, maybe it is. I certainly
agree that Wittgenstein does want us to think about the signpost in an environment
in which there are persons and uses of it; I myself found John’s gesture at an
account of the signpost-using-practice a little bit thin. “We have to think about it
in an environment where there are signposts.” Well, yes, maybe in the end that’s
right, but I also think that there are a lot more articulate things that can be said
about it. I think that one methodologically promising—not merely tempting, but
actually promising—way to fill in and articulate that picture is to try and think
about the possibility of pragmatically bootstrapping accounts of that practice of
using signposts that don’t use the concept of a signpost or knowing which way to
go. Now, it is not going to be the language of wood and movements, but can we
help ourselves to a normative language, but not yet a language of signposts and
semantic content? How much of that practice—if we describe what you are doing,
allowing ourselves to use a deontic vocabulary that talks about correctnesses of var-
ious doings but restricts itself from using semantic and intentional vocabulary—
how much of a practice of using signposts can we express? How much of what we
say in the signpost language can we say in a normative but not yet intentional and
semantic vocabulary? I want to ask that question quite generally about discursive
practices. Now, it may well turn out that we can say some things but not everything.
I want to systematically resist the conclusion from that that there is something
spooky or fishy about the stuff we can’t say in that way. But I think we’ll understand
something if we do that. How much can we say in the language of alethic modality
but not the normative? And so on.
I took John to be claiming that we weren’t going to get a bootstrapping account,
that going pragmatic isn’t going to help us get a bootstrapping account of the phe-
nomena Wittgenstein most cares about. Maybe so. Let’s try it and find out. I didn’t
hear, and surely he didn’t intend, a transcendental deduction of the impossibility
of bootstrapping accounts of anything. Since I believe I have some bootstrapping
accounts of some things in my back pocket, I propose to lay them out and see what
we can do along those lines. I may be missing the Wittgensteinian point, but I still
think that there’s that kind of stuff to find out, even if he’s entirely right about the
conclusion we’re going to end up in.
Now, the indexical point. There’s a longer discussion of this that I am only
epitomizing and gesturing at here.2 It’s a very delicate issue. I do believe that one
can say in an entirely nonindexical language—can formulate rules in an entirely
nonindexical language—for the correct use of indexicals. That claim I am commit-
ted to, and John clearly denies that. I do not think that nonindexical language is an
autonomous discursive practice and an autonomous vocabulary. I don’t believe it’s
intelligible that there be creatures who speak only a nonindexical language. This is
for reasons that will be familiar to readers of Gareth Evans, for instance. I think his
account of the way in which one needs to be able to map egocentric space and pub-
lic space onto each other in order to so much as have the concept of spatiotempo-
ral continuance is right. So, when John says that what you have to be able to do to
use nonindexical language is—I think this is a quote—“not intelligible entirely
independently of the capacity to use indexicals,” I actually agree with that. I don’t
think you can use nonindexical vocabulary unless you can also use indexical vocab-
ulary. But that’s a separate claim from the question of whether someone who can
use both kinds of vocabulary can formulate rules in the nonindexical language that
specify sufficient conditions for correctly using the indexical vocabulary. It’s a del-
icate issue, and certainly, if one were engaged dialectically with the sort of made-up
skeptic that I mentioned briefly, the one who says, “I don’t even understand index-
ical vocabulary,” the place to start would be with the argument that nonindexical
vocabulary is not autonomous, so you can’t actually be in the position that the
skeptic claims to be in there. But of course, that’s not the point that I’m after. So, I
think the question about the independence of these two types of vocabulary is one
thing, and I am not claiming that nonindexical vocabulary is autonomous.
Nonetheless, someone who, partly in virtue of being able to use indexicals in fact,
actually is the master of nonindexical vocabulary, only needs to use the capacities
he is using to deploy the nonindexical vocabulary to state rules that determine the
correct use of the indexical vocabulary. That’s the claim.

NOTES

1. I respond to McDowell’s (and Rorty’s) worries here in much greater detail in the Afterword to
Between Saying and Doing (Oxford: Oxford University Press, 2008).
2. The full discussion can be found in the Appendix to Lecture 2 in Between Saying and Doing.

II. RESPONSE TO JOHN MACFARLANE

I think I know what to say about two of John’s three thoughtful and challenging
questions. That’s what I’m going to say first. I won’t say much about the middle
one.
The first question is this. I was pretty careful to describe being universally-LX
as the genus of which logical vocabulary is the species. What is being left room for
there, and why? Why not just identify this genus with logical vocabulary? I think
you could do that, but doing it that way is more revisionary than doing it the way
I would like to do it. Lots of stuff that people don’t normally think of as logical
vocabulary would come in. What I have in mind is that of the features of discursive
practice that seem plausibly to be really essential, some of them are semantic and
some of them are pragmatic. The semantic ones, I think, generally have to do with
the inferential articulation of things. That is the species that I think ought to be
identified with logical vocabulary. Adding that restriction includes all of the classi-
cal logical vocabulary, and not too much else.
But recall that the original argument was that besides the inferential dimen-
sion in discursive practice, there is always also an assertional dimension. Nothing
that doesn’t involve your capacity to commit yourself assertionally is going to count
as a discursive practice. (This is something I talk about more in the fourth lecture.)
That means that if there is a way of algorithmically elaborating vocabulary that
attributes commitments to people, paradigmatically by using propositional atti-
tude-ascribing locutions, then just from the ability to take people as having made
claims and committed themselves (and though it is no part of what I am going to
do in these lectures, I think there is—that is, I think there is a way of algorithmi-
cally elaborating the implicit ability in practice to distinguish somebody as having
made an assertion and having committed himself into the capacity to say things
like, “John claims that . . .,” “John believes that . . .,” “such and such”) then proposi-
tional attitude-ascribing locutions are going to come out as universally-LX. I think
they are. And I think it is sensible to distinguish the things that are univer-
sally-LX because they are elaborated from and explicative of, as it were, pragmatic
features having to do with what you’re doing, the commitments that you’re under-
taking, and so on, from the semantic ones, which have to do with the inferential relations
among things. Since one very traditional way of thinking about logic is as the sci-
ence of inference, one picks up that tradition by restricting the notion of logic to
the universally-LX things that are elaborated from an explicative of inferential rela-
tions that are essential to any discursive practice. That’s what I had in mind there.
For the third question, John offered a weak and a strong reading of the VP-
sufficiency condition as it holds in the meaning-use diagram that’s expressing the
meaning-use analysis or construction—it’s the same thing—of the notion of being
universally-LX, where the question is whether you can make all of the PV-necessary
practice explicit or only part of it. Here, I actually want to say that I think it doesn’t
matter what you say about that, because if you look at the diagram, the PV-necessary
part, the contained rounded-rectangle there—that can contract. We’re not claim-
ing that it’s the whole PV-sufficient practice that’s made explicit; it’s just got to be
some element of it that’s necessary. It has to be sufficient to elaborate the vocabu-
lary from, but other than that it doesn’t matter how big it is. So we can say that it is
only an aspect of the inferential practices that is made fully explicit. The inferential
practices are only one aspect of the PV-sufficient practices. And that is all that is
made fully explicit in this case. But conditionals really are VP-sufficient to make
that aspect of the PV-sufficient practices explicit. And that aspect really is a neces-
sary condition of every autonomous practice. So the LX condition is met. I think
of that as just shrinking down to whatever the vocabulary is actually sufficient for,
subject to the condition that enough is left that you can elaborate that practice from
it. In this case, I think one can do that.
The other question was about the status of the algorithmic elaborative abili-
ties. Granting that none of us actually has those abilities, since there are going to be
some psychological restrictions on us, even if it is just the number of states we can
be in—there is presumably some upper bound to that—how exactly do we draw
the boundary around this, and how do we motivate drawing the boundary one
place rather than another? In one sense this is not a critical issue because I am
introducing a notion of one set of practical abilities being in principle sufficient for
another. And I can give a clear notion of what “in principle” means. I suggested that
these are the algorithmic elaborative abilities that really are constitutive of a Turing
machine. (I also pointed out that, though people don’t talk about these as among
the primitive algorithmic elaborative abilities, things like response-substitution and
universal state formation are actually always assumed by automaton theorists.)
What is the motivation for having that particular set of algorithmic elaborative
abilities as the sense of “in principle” that matters here? Of course I am looking over my shoulder at the core program of AI-functionalism as a way of thinking about what discursive practices are. So, for that one, it is the Turing machines that matter. That is a sort of strategic reason, having to do with what I am going to go on to discuss, for making that the right general conception to have in play. In fact,
what I need in order to introduce the conditionals is something only very much
weaker than that. The most substantive thing I appealed to was really just response-
substitution. If you actually looked at how you would write the algorithm to do it,
you would use some other abilities, but nothing nearly as sophisticated as a Turing
machine is required to do that. I’m happy enough to have the different notions of algorithmic elaborability, indexed by which abilities you actually include, give us different notions of PP-sufficiency and so different notions of LX-ness. Since, however, what I ultimately care about is the possibility of giving an algorithmic decomposition of discursive abilities into individually nondiscursive abilities, which is what I’m going to say the claim about AI is, if you are going to worry about that you might as well have the full set of algorithmic elaborative abilities. That is the set that’s required for a Turing machine, which can compute everything that can be computed. We don’t know a stronger set than that, and we can build machines that can do that. So there’s nothing mysterious about that sense of algorithmic elaboration. It clearly would not be sneaking a ghost into the machine somewhere at the back if you assume that full panoply of algorithmic abilities. I am eventually (in the third lecture) going to argue against that pragmatic version of artificial intelligence, and the argument ought to address the most plausibly workable version of it, the one that allows the strongest set of algorithmic elaborative abilities.
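To fix ideas, here is a minimal sketch, in Python and purely for illustration, of the kind of thing response substitution involves: a new ability is elaborated from two more basic ones by letting the response of the first serve as the stimulus of the second. The toy abilities and function names are invented for the example; nothing in the lectures turns on this particular rendering.

# A toy rendering of response substitution: two basic abilities, each modeled
# as a function from stimuli to responses, are elaborated into a new ability
# by letting the response of the first serve as the stimulus of the second.
# The particular abilities below are invented for illustration.

def respond_to_red(stimulus):
    # A reliable differential responsive disposition: respond differentially to red things.
    return "red" if stimulus == "red thing" else "not red"

def report_color(response):
    # A second ability: respond to that response with an explicit report.
    return "I claim that the object is " + response + "."

def substitute_response(first, second):
    # Response substitution: the elaborated ability feeds the first ability's
    # response in as the second ability's stimulus.
    def elaborated(stimulus):
        return second(first(stimulus))
    return elaborated

noninferential_reporter = substitute_response(respond_to_red, report_color)
print(noninferential_reporter("red thing"))  # I claim that the object is red.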

That all having been said, I don’t really know how to think about what turns
on drawing the boundaries in one place or another, and I think that’s a really inter-
esting question. For each way of filling in the “in principle” practical sufficiency of
one set of abilities for another will yield a different notion of PP-sufficiency. There
are clearly a lot of them available (“in principle”), and I can only pretend to have
thought about this one.

III. RESPONSE TO PIRMIN STEKELER-WEITHOFER

Let me say something briefly about the last point. It seemed to me striking that
there is a parallel between what by the end I was calling executive algorithmic elab-
oration or decomposition, the one that matters for AI, and the other sort of PP-
sufficiency relation, the notion of practical elaboration by training. For the notion
of algorithmic decomposition still has applicability even in that case, because it
makes sense to talk about pedagogical algorithmic elaboration in the form of train-
ing regimens that are flow charts. The particular source of the interest there was the question of whether, when we move with Wittgenstein to centering our attention on practical elaboration, the extension of our practices by training, that means we have to give up on the idea of pragmatic analysis of anything and just say that all we can do is describe the practices that we have or the abilities that we have. So
the question arises: is it still possible to see the sort of dependence of one sort of
practice on another that meaning-use analysis was looking for, the sort that Sellars
invokes for instance when he argues that what you have to be able to do in order to
use looks- or appears-talk already presupposes what you have to be able to do in
order to use is-talk? It seemed to me important to see that even after we have decided with Wittgenstein to focus our interest on this other kind of PP-sufficiency, practical elaboration by training, there still is the possibility of algorithmic decomposition or elaboration there, in the form of thinking about completely solved pedagogical problems. And that then was my entry into the political issue. That is
how I was thinking about these points as being related.
There are a couple of other small things that I might say about Stekeler-
Weithofer’s deep remarks about the framework in which people are thinking about
AI. As he pointed out, we mostly don’t disagree about that. But he is setting up a
richer background for the discussion. One of his points is to ask whether it really is
true that context-free vocabularies are VP-sufficient to specify Turing machines. He
points out that if we want to answer questions like whether a machine has broken
down or not, we are going to have to step outside of the computer language, out-
side of the context-free language, in order to answer that question. That is true, and
it is very important when we think about practical or, as he said, real AI rather than
what he called utopian AI. But I really was only addressing the project of utopian
AI. The automata that I am talking about are abstract machines; they are not physi-
cal machines. So they are not machines for which the question arises of whether they have bugs in them, or where it is appropriate to ask: Does this chunk of machinery actually implement this finite-state automaton or this Turing machine? Those issues just don’t come up.
I am thinking about a conceptual analysis of things. Why ought we not, in a certain sense, to be surprised that the weaker, context-free languages can say everything that a Turing machine has to do in order to compute functions that the automata that deploy context-free languages cannot themselves compute? I think this is
because of the very phenomenon that Stekeler-Weithofer pointed to: in the con-
text-free language, all we have to be able to do is, in effect, describe what it is that
the Turing machine has to be able to do. So, even things that in the context-free lan-
guage you cannot do, you can still have terms that let you say what you would have
to do to do that. In a way it is like the athletic coach, who can describe for the gym-
nast what she needs to do in order to be able to execute a certain maneuver. Even
though the fat middle-aged coach couldn’t perform this maneuver to save his life,
he still might be able to say what it is you need to do in order to perform the
maneuver. That is what you can do in a context-free language: say what it is that the
Turing machine has to be able to do. And it is because being able to say what it is
and being able to do it are two different things that the Turing machine can be
more capable.
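To make the contrast vivid, here is a minimal sketch, on the simplifying assumption that specifying a Turing machine comes to writing down its finite transition table: the table is the sort of thing a weak vocabulary can express, while actually computing with it takes an interpreter that does what the table only says. The particular machine and the Python rendering are illustrative only.

# A Turing machine specification is just a finite piece of data: its transition
# table. Writing the table down is "saying" what the machine has to do; the
# run() function is what actually "does" it. The machine here (a unary
# successor) is an illustrative toy, nothing more.

# (state, symbol) -> (symbol to write, head move, next state)
unary_successor = {
    ("scan", "1"): ("1", +1, "scan"),   # skip rightward over the 1s
    ("scan", "_"): ("1", 0, "halt"),    # write one more 1 and halt
}

def run(spec, tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = spec[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

print(run(unary_successor, "111"))  # 1111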
Another question is about the appropriateness of the Turing test. Here I think
that there is really a deep issue, and it really has to do with the modality that is
involved in the Turing test. The Turing test is a test for sapience. If you cannot tell,
no matter how extended the conversation is, whether you are dealing with a
machine or a human being, some real deployer of vocabularies, then the system
passes the test. What is the force of that ‘can’? In one sense, of course, the conversa-
tion only lasts as long as it lasts. At a certain point, even if you’re only just, as
Bertrand Russell said, medically limited by the fact that life only lasts so long, the
conversation will come to an end. (I’ve had many conversations with Kambartel
over the years that do not seem to have been medically limited—if conversations
could go on forever, ours do. They are as close as you can get.) But the fact that this limitation is in some sense only a medical, “in principle” one means that whenever you say, all right, I have said everything I can say, I have tried everything I could try to discriminate these things and I cannot tell whether it is a human or a machine, it always could be that the very next thing it would have said would have tipped you off, would have been the thing that would have let you discriminate.
Does that mean that you can tell, or does it not mean that you can tell? If there is
some path through the conversational labyrinth that you could have taken, if it had
occurred to you, or it had just happened that you had taken that path, if you’d lived
long enough to do that, that would have made the distinction, then in my under-
standing of the Turing test that means that you can tell the difference between them
by engaging the system with conversation. The fact is that you cannot actually be sure that something has passed the Turing test: there is no finite amount of conversation that can let you certify the negative existential claim that there is no discriminating conversational path, though there is a finite amount of conversation after which you may be in a position to say that it has failed the Turing test. I am com-
fortable with the modality according to which that nonetheless would count as your being able, as its being possible, to discriminate them—if there is some conversation that would have permitted the discrimination. So when I say that I think the Turing test
is a fair test for what I want to call synthetic intelligence, that is, genuine intelligence
that is created in this particular way, it’s that strong modality that I mean. Stekeler-
Weithofer is pointing out in effect that that’s no use to us in actually assessing these
things, at least if we want to certify something as deploying an autonomous vocab-
ulary. We are never really in an epistemic position to do that. This is nontrivially connected, as he at least hinted, to the halting problem in more orthodox AI. But the
semantic point I am after concerns when it would be correct to apply the concept
deployer of an autonomous vocabulary, not the epistemic question of how we would
find out.

IV. RESPONSE TO HUW PRICE

I think this is a very difficult issue. Huw Price and I have been trying to hammer this
out, Richard Rorty and I have been trying to hammer this out, and John McDowell
and I have worried about the same issues. We all have deflationary impulses, and
everyone except Price and Rorty has some kind of realistic impulses, too. Those two
think that we should just get over all of these attempts to speak with the vulgar in
the tradition, and we in the middle are stuck, torn both ways. I actually find meta-
physics left in claims such as that we should just stay on the word side of the word-
world divide. I think that that sort of recoil is betraying that one hasn’t sufficiently
freed oneself from the traditional way of thinking. It is moving from a one-sided
objectivism to a one-sided subjectivism. But that is not to say that I know exactly
how one should be in the middle.
Rorty and McDowell and I once agreed to triangulate our positions like this.
Rorty, in Philosophy and the Mirror of Nature,1 argues that the only way to avoid the
pickle modern and contemporary philosophy have gotten into is to banish the con-
cepts experience and representation. Any use of them, he thinks, will start us down
a familiar slippery slope into the abyss. In Making It Explicit, I follow Rorty in care-
fully avoiding any appeal to or use of the term ‘experience’ in discussion of mind
and language. And I do not appeal to representation as a basic concept in my
semantics. But I do go to some trouble to show how to introduce representational
locutions, and to understand what they express, on the basis of the twin notions of
inference and assertion, which serve as the pragmatic bases on which the semantics
is built. I claim that this sanitized notion of representation does not involve any of
the objectionable commitments Rorty so tellingly diagnosed. McDowell, in Mind
and World, is happy to use hygienic notions both of experience and of representa-
tion. I do not think that McDowell’s version of experience involves commitment to
the Myth of the Given, or that his (less explicit) embrace of representational locu-
tions involves an objectionable sort of semantic foundationalism or reductionism.
Rorty thinks that the abyss is so threatening that a fence must be erected a quarter-
mile from the edge, to keep the unwary safe. I think that a hundred yards away is
sufficiently prudent. McDowell is happy prancing surefootedly on the very edge of
the precipice like a mountain goat. I don’t think McDowell falls in. But I want to
say: “Kids, don’t try this at home. This man is a professional.” I wouldn’t myself rec-
ommend that anyone try to duplicate his performance. Nonetheless, Rorty objects to my intermediate view from, as it were, the left, and McDowell from the right.
Price is with Rorty on this one.
Here is the sort of thing that I mean about not being sufficiently freed from the
old ways of thinking. When Price says, “In effect, it would tell us why creatures in
our situation would be led to develop a modal physics, even if they inhabited a non-
modal world”—I don’t understand what ‘a nonmodal world’ could conceivably
mean. That counterfactual—what if we inhabited a nonmodal world—makes no
sense if the expressive resources recruited by modal vocabulary are really part of the
very meanings of the words that we use to describe the world. If the claims I am
making in my fourth lecture about the expressive role characteristic of modal
vocabulary are correct, then it is a necessary feature of modal vocabulary that it play
that role. It is not an optional or contingent feature of modal vocabulary. The claim
is that the very idea of empirical descriptive and explanatory talk involves this
modal articulation.
Price wants to say that it is a fact about talking, not a fact about the world. I don’t understand the game that we’re playing when we take some features of the only understanding of things we could have, assign responsibility for them to us, and say that that’s because of something about our conceptual activity, our talk, not some feature of the world. That enterprise, which Kant certainly encouraged
(whether or not in the end he ought to be understood as practicing it himself), the
enterprise of trying to assign responsibility for some of these features to one side
or the other of a word-world divide, is what I don’t understand. One place where
this comes out is this. Price says: “You seem to believe that there is some genuinely
descriptive vocabulary, where that involves representing how things are.” I do. I
think describing is something we do—well, Huw thinks that, too—but I think that
it is right, not just in our ordinary talk, but even in our philosophical talk, to say
that part of describing is representing how things are. Now Sellars says that in the
dimension of describing and explaining, science is the measure of all things, of
those that are that they are, of those that are not that they are not. (This is his
famed “scientia mensura” remark.) He is clear that not everything that we do with
the language is describing. He says also that the descriptive and explanatory
resources of the language advance hand in hand. That is one of the thoughts that
I have been developing in the lecture that we just heard, that explanatory resources
are the ones that take explicitly modal form. Those are the ones that tell us how
facts that we can represent stand in explanatory relations to one another, and how
the very idea of a fact, indeed of a way the world could be, is unintelligible apart
from the notion of explanatory connections that we make explicit with modality
between those facts. The job of those modal expressions is not to describe some-
thing. So if by a nonmodal world we mean one where one does not use modal
resources to describe features of the world, then I guess we do live in a nonmodal
world. When I say that it is and must be one that we can only be aware of, describe,
or understand in modal terms, Huw is going to say that we should not think of
those modal features as features of the world. Well, if they are necessarily features
of our talk about the world, if our talk about the world is necessarily modally artic-
ulated, I don’t really understand the point of saying that there is no objective fea-
ture of the world that corresponds to these things. If what we’re saying is that their
job is not to describe, that they have a different job from that of the descriptive
vocabulary, in particular because they are articulating these explanatory rela-
tions—well, I understand and accept that. I do not know what to say at this point.
I do not know how to talk about what to assign responsibility for to the world and
what responsibility to assign to us. I do not know how to play that game, and I am
trying not to play that game. That is on the one hand wriggling and not taking
sides on the opposition Huw offers; on the other hand, I think of it as trying to get
beyond the framework that looks as though you need to come down on the word
side or the world side. One of the things I will be trying to do, no doubt clumsily, in the sixth lecture is to give us some way of talking that gets outside of having
to come down in a one-sided way.
The questions then are, exactly what is involved, what is required, what is the
best way to turn one’s back on metaphysics? Price wants to do that; I want to do
that, too. But I think that assigning responsibility for some feature of our discursive
practice either to us or to the world is doing metaphysics. It is just doing it in a sub-
jectivist way rather than in an objectivist way. (But that is not to say that I know
how to talk about these things.)
Huw thinks that doing anthropology is doing a kind of biology. I don’t
understand that. I think that social sciences are sciences, too—they are not
reducible to natural sciences, even to biology—and that the last thing you want to
do is be biological in the kind of anthropology you are doing, if it is discursive
practice that you’re talking about. Discursive practice is essentially a kind of social
practice. Biology gets to the verge of the social in things like population biology,
but when it does it is not getting to the social in the normative sense that we need
to talk about discursive practice. So I actually find that an unnecessarily reductive
form of naturalism even if you are biological on the subject side rather than on
the object side. (Ruth Millikan has offered the most sophisticated sort of bridge
between evolutionary biology and the normativity essential not only to practical,
but also to discursive intentionality. I do not pretend to have an a priori objection
to her way of thinking about things.)
Here is one last maybe confession, or form of the confession, about not know-
ing how to talk in this area, which is possibly responsible for the wavering that Price
is seeing. He says that once representational relations become part of our substan-
tial theoretical semantics (not just part of an account of how these words are used in ordinary talk, but part of our philosophical theorizing), so do their relata. We become
committed to both ends of the relation. That means we must acknowledge the exis-
tence not just of words, but of all the things to which a representational semantics
takes those words to stand in semantic relations: numbers, values, causes, condi-
tional facts, and so on. I accept that. I do think there are numbers. I think there are
norms. I do think that adopting representational talk toward basically declarative
sentences commits you to the existence of facts corresponding to them and to the
objects the facts are about. But a fact for me, as for Frege, is just a claim that is true (in the sense of a true claimable, whether or not there are corresponding claimings of it), and if there are true normative claims, then there are normative facts. If there
are true modal claims, then there are modal facts. What I don’t have is a metaphys-
ical account of the nature of facts in general, for instance, that requires them to be
built up in tinker-toy fashion as spatial arrangements of objects or something like
that, which is going to make you puzzle about what kind of a thing a modal fact or
a probabilistic fact or a normative fact could be. I want to say that I’m using ‘fact’
in a much more deflated sense than Huw is when he is worried about us being
committed to these things. When I say that there are numbers, I’m saying that some
arithmetical claims are true and that numerals function as proper singular terms.
That is all I’m saying. I think that is how Frege taught us to think about the ques-
tion of whether there is a certain kind of object—it just is the question of whether
a certain kind of expression, occurring essentially in the expression of truths, func-
tions as a singular term. Huw wants us to say that we should stop talking after we
say that an expression functions as a singular term. That much is okay, that is good
pragmatist anthropology. Do not go on to say that because some of these claims are
true—now understood in a properly deflationist sense—that means that there are
arithmetic facts and numerical objects. But I do not see why not to say that, once
we understand the suitably deflated sense of such claims. These claims are true; a
fact is a claim that is true.
Now what I am not sure about is this. I am not confident that the various remarks I’ve made about how to turn one’s back on metaphysics and have a relaxed view about the representational dimension of language all actually hang together and are coherent. I do feel the tension that Huw is diagnosing. But it does not seem
to me necessary to go all the way over to the positive claim that all this stuff is hap-
pening on the side of the words and we have a non-Humean world that is . . . what?
I do not know what is on the object side if I stop thinking about all of these facts
and objects. That’s probably just a complicated confession of confusion, but those
are the things I think I know how to say.

NOTE

1. Princeton University Press, 1979—to be reissued, with an Introduction by Michael Williams, in a thirtieth anniversary edition in 2009.

V. RESPONSE TO JAROSLAV PEREGRIN

Jaroslav Peregrin offers us a sophisticated discussion of the relation between formal
and philosophical semantics and applies it to the incompatibility semantics I intro-
duce in my fifth lecture. Two of his ideas that I find particularly helpful are stated
very early on:

1. “We can say that where the model connects with reality is not semantics,
but inference.”
2. “Semantics is simply our way of seeing the inferential roles as distributed
among individual expressions.”

If I understand him correctly, he is worried that the incompatibility semantics vio-
lates at least the spirit of both of these principles. I do not see that it does.
To begin with, the incompatibility semantics is not offered as a replacement for
an inferentialist semantics, but as a piece of one. It takes its place as part of a pro-
gram of grounding semantics in pragmatics, of understanding meaning in terms of
use. The inferentialist idea is that what makes something a discursive practice, one
capable of conferring conceptual, paradigmatically propositional content on acts and
expressions that play suitable roles in that practice, is that some performances have
the significance of assertions and inferences. That is another way of saying that it
must be intelligible as a practice of giving and asking for reasons. I argue on quite
general grounds that to be a practice of giving and asking for reasons, participants
must in practice distinguish two sorts of normative status: commitments (acknowl-
edged in the first instance by assertings) and entitlements. There are then at least
three basic sorts of inferential relations among assertible contents that can be dis-
tinguished:

• Commitment-preserving inferences,
• Entitlement-preserving inferences, and
• Incompatibility entailments.

I say that two claimable contents are incompatible in case commitment to one pre-
cludes entitlement to the other. And p incompatibility entails q in case everything
incompatible with q is incompatible with p. Thus that Pedro is a donkey in this
sense entails that Pedro is a mammal, because everything incompatible with his
being a mammal is incompatible with his being a donkey. Commitment-preserving
inferential relations are a generalization, to include the case of nonlogical, material
inferences, of deductive inferential relations. Entitlement-preserving inferential rela-
tions are a generalization of inductive inferential relations. And incompatibility
entailments are a generalization of modally robust, counterfactual-supporting
inferential relations.
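To fix ideas, here is a minimal sketch of that definition over a toy incompatibility frame, flattened to single sentences and with invented incompatibilities, purely for illustration:

# A toy incompatibility frame: each sentence is mapped to the set of sentences
# materially incompatible with it. The entries are invented for illustration;
# in the full semantics incompatibility relates sets of sentences.
incompatible_with = {
    "Pedro is a donkey": {"Pedro is a reptile", "Pedro is a prime number"},
    "Pedro is a mammal": {"Pedro is a reptile"},
}

def incompatibility_entails(p, q):
    # p incompatibility-entails q just in case everything incompatible with q
    # is incompatible with p.
    return incompatible_with[q] <= incompatible_with[p]

print(incompatibility_entails("Pedro is a donkey", "Pedro is a mammal"))  # True
print(incompatibility_entails("Pedro is a mammal", "Pedro is a donkey"))  # False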

The incompatibility semantics is intended to codify only this third class of
inferential relations. It does not pretend to capture the full inferential roles expres-
sions play, because it ignores the commitment- and entitlement-preserving infer-
ential relations they stand in. But the subclass of inferential relations it does aim to
capture, it aims to capture precisely in Peregrin’s sense of being a “way of seeing the
inferential roles as distributed among individual expressions.” And the picture is
indeed one according to which the model “connects with reality” via what practi-
tioners actually do, the pragmatics, that is, in terms of inference and assertion.
Incompatibility, and the semantics that captures a portion of inferential role by
associating with a sentence the sentences incompatible with it, has no relation to
actual practice that is not mediated by the consequential relations among norma-
tive statuses that it codifies. So I think the two principles Jaroslav formulates are
fully complied with.
He also raises a technical reason for concern along these lines. It can happen in
some settings that an inferential consequence relation that can be formulated fini-
tistically has only infinitistic representations in terms of incompatibilities. This
leads Jaroslav to suspect that the camel of model theory is poking its nose into the
tent of inferentialist semantics. This would be a problem if it meant, for instance,
that we were obliged to appeal to the denotational relations fundamental to model
theory antecedently to and independently of the inferences they support. But I do not see that we are. Both Peregrin and I see model theory as providing a rich
metalanguage for codifying inferences—in some important ways, more expressively
powerful than traditional proof-theoretic metalanguages for codifying inferences.
Nothing stops us from still understanding them as “ways of seeing the inferential
roles as distributed among individual expressions.” Indeed, Peregrin himself has
done the most important work to show how that can be done for model theoretic
semantics. So even insofar as infinitistic representations of inferences bring model
theoretic considerations into play, I don’t think the inferentialist construal of what
is going on is threatened. And, if one were more worried about this issue than I am,
it is always open to us to say that the fact that some particular incompatibility codi-
fication is, as it were, gratuitously infinitistic just shows that this is not one of the
inferences that should be represented that way. After all, the incompatibility seman-
tics is not meant to represent all inferential relations.
The next point is that there is a formal recipe for going back and forth between
the incompatibility semantics and standard Kripke semantics for modal languages.
All the information about incompatibilities is contained in the minimal incoher-
ent sets. (Two sets of sentences are incompatible just in case their union is incoher-
ent.) And possible worlds correspond to maximal coherent sets. You get S5 if you
impose no further structure, and various other modal logics in the familiar way by
adding a kind of second-order compatibility/incompatibility relation, correspon-
ding to accessibility of worlds. (Kohei Kishida has shown how to represent a wide
variety of other familiar modal logics in the incompatibility framework by adding
neighborhoods.) If the two semantic schemes are just notational variants of one
another, where is the philosophical payoff supposed to come from adding the
incompatibility version? The philosophical motivations of the two schemes are
quite different, and so are the philosophical lessons we can draw from them. Three
results can illustrate this.
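Before turning to those results, here is a toy rendering of the recipe just described, with a three-sentence language and a single stipulated minimal incoherent set; the example carries no theoretical weight:

from itertools import combinations

# A three-sentence toy language and one stipulated minimal incoherent set
# (say, p and q are materially incompatible). All entries are illustrative.
language = ["p", "q", "r"]
minimal_incoherent = [frozenset({"p", "q"})]

def incoherent(sentences):
    # A set of sentences is incoherent just in case it includes a minimal incoherent set.
    return any(bad <= sentences for bad in minimal_incoherent)

def incompatible(x, y):
    # Two sets of sentences are incompatible just in case their union is incoherent.
    return incoherent(frozenset(x) | frozenset(y))

def possible_worlds(lang):
    # Possible worlds correspond to maximal coherent sets of sentences.
    subsets = [frozenset(s) for n in range(len(lang) + 1) for s in combinations(lang, n)]
    coherent = [s for s in subsets if not incoherent(s)]
    return [w for w in coherent if not any(w < other for other in coherent)]

print(incompatible({"p"}, {"q"}))                      # True
print([sorted(w) for w in possible_worlds(language)])  # [['p', 'r'], ['q', 'r']]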
First, incompatibility semantics treats modal and nonmodal logical vocabulary
on a par. The Kripke framework understands classical logical vocabulary in terms of
truth-at-a-world, and modal logical vocabulary as corresponding to quantification
over worlds. By contrast to that two-stage procedure, incompatibility semantics offers
the same kind of definition of possibility as it does for negation. Each makes explicit
aspects of material incompatibility (non-compossibility). Since material incompati-
bilities are an essential feature of the meaning of any nonlogical vocabulary, they
both show up as explicitating features of discursive practice in general.
Second, the incompatibility semantics is fully recursive, in that the semantic
interpretants of more complex formulae are entirely determined by the semantic
interpretants of less complex formulae. But it is holistic at each level; changing the
semantic value of any sentence can change the semantic values of all the rest. So it
is not compositional. But it is projectible and systematic. This is a combination of
features that Fodor, among others, has argued is not possible.
Third, many consequence relations permit the imputation of an incompatibil-
ity relation, on which basis modal operators can be introduced to make explicit var-
ious features of the consequence relation. This notion of a logic intrinsic to a
consequence relation is a distinctive feature of the incompatibility framework.
I also think the way in which the semantic values available, and hence the formulae that are valid, depend on the expressive power of the language—since inferences are defeated by claims that are incompatible with the conclusion but not with the premises, a less expressive language will invalidate fewer inferences—is both technically and philosophically interesting. Technically, it means one is working in
effect with variably multivalued logics. Adding new sentences to the language adds
new semantic values, and can alter what follows from what, even among the old
sentences. Philosophically, this set-up gives us a tractable test-bench for investigat-
ing the relations between the expressive power of a language as a whole and the
meanings of its individual sentences.
The final question Peregrin asks concerns how I am thinking about logic. He
offers two alternatives. Either

• “Logic is something essentially nonlinguistic, some very abstract struc-
ture, which can be identified and mapped out without studying any real
language . . . [such that] any possible language is restricted by the struc-
ture discovered by logic.”

or

• Logic is the study of the meanings of the logical words of actual natural
languages.

He suspects that I reject the supposed exhaustiveness of this classification, and I do.
As I detail in the lectures, I think that logical vocabulary is elaborated from and
explicative of the broadly inferential relations that characterize any vocabularies.
The constraint on possible languages is definitional and pragmatic: if they don’t
have sentences that stand in relations of material inference and incompatibility,
then it is not discursive practices we are talking about. That constraint does not
stem from logic, but underlies it. To be a bit of logical vocabulary, it must be the
case that in being able to assess the material goodness, in various senses, of infer-
ences, one already knows how to do everything one needs to know how to do to
deploy the vocabulary in question. (That is the “elaborated from” part.) And the
expressive role of the vocabulary must then be to let one say that various inferences
are good, in the sense characteristic of the logical locution in question. Thus assert-
ing a two-valued conditional is saying that the inference from its antecedent to its
consequent is good in the sense that it does not have true premises and a false con-
clusion. Asserting an intuitionistic conditional is saying that the inference from its
antecedent to its consequent is good in the sense that there is a recipe for turning a
conclusive argument for its antecedent into a conclusive argument for its conse-
quent. Asserting a strict implication is saying that the inference from its antecedent
to its consequent is good in the sense that it is impossible for its antecedent to be
true and its consequent not. And so on.
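Put schematically (this is only the familiar textbook gloss on each connective, in standard notation, not the meaning-use analysis itself), the three senses in which the inference from p to q is said to be good come to:

\[
\begin{array}{ll}
\text{two-valued conditional:} & \text{good iff } \neg(p \wedge \neg q)\\
\text{intuitionistic conditional:} & \text{good iff some effective construction turns any proof of } p \text{ into a proof of } q\\
\text{strict implication:} & \text{good iff } \Box(p \supset q),\ \text{i.e. } \neg\Diamond(p \wedge \neg q)
\end{array}
\]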
It could be that every use of natural language expressions such as “if . . . then__” corresponds to the use of some genuinely logical expression. But that cer-
tainly need not be the case. And it wouldn’t matter to the enterprise of logic, which
is to introduce connectives whose inferential and expressive roles are clear,
tractable, and surveyable, and use them to make explicit aspects of the inferential
roles of nonlogical expressions. This broadly metalinguistic role fits neither of
Peregrin’s proffered alternatives. The philosophy of logic traditionally revolved around two questions (I am thinking here of settings-out such as Quine’s little book Philosophy of Logic). The demarcational question challenges us to say
what it is that makes something a bit of specifically logical vocabulary. (So we know
how to classify alethic or deontic modal operators, the set-theoretic epsilon, sec-
ond-order quantifiers, functional expressions, and so on.) The evaluative question
challenges us then to say what the correct logic is. The account of the expressive role
characteristic of logical vocabulary that I offer in Between Saying and Doing (fur-
ther developing the characterization in Making It Explicit) offers a detailed answer
to the demarcational question, in the form of a meaning-use analysis of LX (elab-
orated-explicitating) vocabulary. And on that account, the evaluative question
lapses. The question is not which logic is correct (classical or intuitionistic logic, for
instance), but just what the expressive role of the different connectives is. From my
point of view, Peregrin’s partisanship for intuitionism is retrograde: the result of
asking a question we should have gotten beyond.

VI. RESPONSE TO SEBASTIAN RÖDL

Sebastian Rödl is expressing a view that I think is genuinely deep, and delineating
a difference in approaches in philosophy that I think is also genuinely deep. He is
starting with a notion of form, of unity, of principle connected to explanation that
has its origins in Aristotle, that also, as he suggested, animates the German idealists,
and that he and my colleague Michael Thompson have been developing in the con-
temporary age.1 I actually think that the next twenty years is going to be the site of
an epic confrontation between this way of thinking about things and the attitude
of murdering to dissect that I represent here. Though I do not myself make much
of the specialness of the organic case, and so do not usually rely on that class of
metaphors.
What I think is at issue is two forms of explanation. Rödl is quite right in see-
ing what I am doing as a kind of bottom-up explanation, by elements, by construc-
tion. He is opposing that to a kind of top-down explanation, where you explain the
elements in terms of the form, the unity, the principle of the whole thing that one
is talking about. I think it is very important to understand the difference between
and the relations between these different kinds of explanation. It is Sebastian’s work
and Michael Thompson’s work that begins to give me a glimmer about how
Hegelian infinite explanations might contrast with the sort of finite ones that, it is
true, I treat as the gold standard of understanding—not in the sense that I think
everything can be understood that way, or that anything that we cannot understand
that way must forever remain unintelligible. When I call it the “gold standard,” I
mean that when I can get a bottom-up constructive explanation from elements, I feel
I understand things better than when I am understanding the same things in the top-
down way. Sebastian thinks that the most important things—the notions of life, of
intentional agency, of language, of consciousness, of discursiveness—can in principle
only be understood top-down, in terms of the distinctive form that different state-
ments, for instance, about living things, get in virtue of the form of statements about
the biological, and similarly for teleology, for agency, for discursiveness.
I think that he is probably right that to discuss something living or intentional as such you’ve got to take into account this top-down way of thinking about it: to pick it out as something living, as something intentional, as something rational, you have to pick it out in the kind of vocabulary that he is describing for us. I
think he might be overlooking the possibility that though these forms of explana-
tion are different, and though the home language game of talk of living, of acting,
and of thinking beings might well be in the sort of idiom that he’s talking about, it
is at least not obvious, and it may not be true, that you cannot explain the applica-
bility of top-down explanatory vocabularies in bottom-up terms. I don’t know.
Because I feel I understand things better when I can understand them in bottom-
up terms, I am aiming to understand the applicability of the very terms he is talk-
ing about, but to explain them in another vocabulary, in a bottom-up vocabulary.
I think Sebastian might systematically underestimate the extent to which new stuff
happens when you put lots of more basic things together in new structures. He is confident that because the poor cat that he threw into the fire, in responding by bursting into flame, does not exhibit anything like the response of a living creature—never mind intentionality—no account that appeals to stimulus-response connections could ever exhibit the sort of phenomena that are described in the top-
down vocabularies that he uses. When the bolt falls off of my automobile, I can
hold the thing up and say, this is never going to take me to the airport in the morn-
ing. But that does not mean that putting together in the right way a lot of little
things that will never get you to the airport in the morning doesn’t add up to some-
thing that can take you to the airport in the morning. Of course he is going to hasten
to say that this might be true of mechanism, but that living things are not like that.
Well, maybe so; maybe not.
In particular, the sort of rational teleology that he’s talking about, the sense in which there is goal-seeking behavior, is one case where it seems we have learned a
contrary lesson. One of the striking things about automata that can execute condi-
tional branch-schedule algorithms is that they are goal-seeking. The exit condition
is not, as he characterized it, just ceasing to respond. It is sensible, and I think it is
correct, to understand these systems as casting about, trying all sorts of things, until
and unless a certain goal situation is brought about. I think that there is a kind of
teleology that is exhibited by anything that is describable as having the structure of
executing a test-operate-test-exit cycle, and in fact we see sophisticated goal-seeking
behavior in robots that instantiate such things. Now again, he is going to insist that
there is a big difference between what these artifacts do and what biological things
do—and I do not mean to deny that.
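Here is a minimal sketch of that structure, on the assumption that a test-operate-test-exit cycle can be modeled as a loop that keeps operating until its goal test is satisfied; the switch-flipping example is invented for illustration:

import random

# A minimal test-operate-test-exit (TOTE) cycle: test whether the goal
# condition obtains; if not, operate (here by casting about at random), then
# test again; exit only when the goal situation has been brought about.

def tote_cycle(state, goal_test, operate, max_steps=100000):
    steps = 0
    while not goal_test(state):        # test
        state = operate(state)         # operate, then loop back and test again
        steps += 1
        if steps >= max_steps:         # a practical cutoff, not part of the idea
            break
    return state                       # exit

# Illustrative goal-seeking behavior: flip switches at random until a target
# configuration is reached. The target and the operation are toy examples.
target = (1, 1, 0)

def flip_random_switch(switches):
    i = random.randrange(len(switches))
    return switches[:i] + (1 - switches[i],) + switches[i + 1:]

print(tote_cycle((0, 0, 0), lambda s: s == target, flip_random_switch))  # (1, 1, 0)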
My account of what I call the level of practical intentionality and the way in
which that might be practically elaborated (not now algorithmically elaborated but
practically elaborated) into discursive intentionality, started off with living beings,
the ones that really do have stimuli and responses, their skillful doings, what they
are doing as part of the lived life of an animal, and the way in which objects are
involved in that and goals are sought. But I am animated by an idea that I think was important to the American pragmatists: that there is a structure there that we can see in living organisms, that we can see in evolution, that is common to the learning of individual organisms, to the evolution of species, and that is common also—though this for obvious reasons was not a big theme for the classic American pragmatists—to these sorts of goal-seeking machines, genuine functional machines, that are not simply stimulus-response engines, precisely because of the flexibility of conditional branch-schedule algorithms that are TOTE cycles. The idea was that although it is
true that the details of these particular cases were quite different, living things do it
differently, nonetheless they can all be described in a common vocabulary, the
vocabulary of test-operate-test-exit cycles. The challenge is to see whether the fact
that they can be described in this common vocabulary is something that we can use
to explain ultimately the applicability of the kind of more sophisticated top-down
explanatory vocabulary that Sebastian is interested in. I still think that is an enter-
prise that’s worth trying. And I think that the sorts of things that he is led to say—
for instance, that an assertion is never a response to a stimulus—are not clearly
correct. I think it can be enlightening to think about observation reports as inter
alia responses to stimuli, to think of them as having a structure that involves exer-
cising a reliable differential responsive disposition, something that we reporters of
things can share with merely sentient creatures like pigeons but can also share with
clanking artifacts like photo-cells hooked up to the right things, even though the instantiations of these things are different and much richer vocabularies are applicable to us than to the pigeon and to the pigeon than to the clanking artifact. For
nonetheless, the reason colorblind people cannot noninferentially report the pres-
ence of red things is that they are not properly describable in terms of exercising
reliable differential responsive dispositions to respond differentially to red things.
That is an important thing to realize. Of course we also want to talk about the sig-
nificance of the fact that reporters are applying concepts, and then we are using a
different vocabulary. But the fact that that richer vocabulary applies, and must
apply, for it to be an observation, does not, I think, undercut the intelligibility, the
illumination, that we get from also observing that we can describe what they are
doing in a much sparer, bottom-up vocabulary that already applies in other cases
where the richer vocabulary does not.
Let me end where Sebastian ended, with his proof that he and I are identical.
We have a lot in common, I’m glad to say. His argument turns on the observation
that if he says, “The ball is red,” and I say, “The ball is green,” there is something
wrong. We’re not just okay normatively. Maybe we’re better off than if I say, “The
ball is red, and the ball is green,” but it doesn’t mean that we are off the hook just
because it’s the two of us. In talking about this, let me substitute something that we
really do disagree about—since I don’t know what color he thinks the ball is actu-
ally. He says that if I offer anything other than top-down explanations, what I’m
explaining cannot be a feature of life, of agency, of rationality. I say that I can
explain the applicability of that terminology even though my explanation is
couched in a different vocabulary. Now again, if I said both of those things, I would
be contradicting myself—and so I would be in real trouble. I acknowledge that
there is still some problem when he and I disagree. But it is a different kind of prob-
lem, and what defines us as different subjects is the difference between the norma-
tive pickle that I’m in if I contradict myself, and the normative pickle that we’re in
if we contradict each other. All I need to claim, it seems to me, is that there is a nor-
matively distinctive wrongness that is sufficient to distinguish the subjects.
Now it is a basic Hegelian claim that self-conscious individual subjects and
their communities are simultaneously synthesized by reciprocal recognition. A con-
sequence of that is that in the end you cannot understand the two sorts of norma-
tive wrongness, the kind distinctive of the individual subject, the sense in which I
repel incompatible commitments, apart from the sense in which we repel incom-
patible commitments and acknowledge an obligation to sort things out ourselves.
Those are two sides of one coin. I did not talk about that second kind of normativity, the kind Sebastian rightly points to, which is intelligible in principle only as part of a package that includes the kind I did talk about, and vice versa. Those differences are equally
essential in this holistic structure. But, as the differences between Sebastian’s rec-
ommended methodology and mine call normatively for a fruitful dialogue between
us, and may eventually lead to a better understanding, I can only applaud the sense
(Hegel’s “identity in the speculative sense”) in which we are in the end identical.
(In the Afterword to Between Saying and Doing, written after the Prague meet-
ing, I respond at greater length to some of the comments presented there, especially
to the methodological challenges of McDowell and Rödl. Readers interested in
these debates are invited to consult that discussion.)

NOTE

1. Thompson’s views are now more widely available in his wonderful, path-breaking book Life and
Action (Cambridge: Harvard University Press, 2008).
