
A Theory of Manipulative Speech

JUSTIN D’AMBROSIO
Australian National University

Manipulative speech is ubiquitous and pernicious. We encounter it continually in both
private conversation and public discourse, and it is a core component of propaganda,
whose wide-ranging insidious effects are well-known. But in spite of these facts, we
have no account of what exactly manipulative speech is or how it works. In this paper
I develop a theory of manipulative speech. On my view, manipulative speech involves
a deliberate, coordinated violation of the two core Gricean norms of conversation:
Cooperativity and Publicity. A manipulative speaker violates Cooperativity to fur-
ther her goals at the expense of the audience’s. But the manipulative speaker also
violates Publicity in intending, and taking steps to ensure, that her speech appears
cooperative. Thus, in a slogan, manipulative speech is covertly strategic speech. This
definition correctly classifies a wide range of speech acts that we intuitively take to be
manipulative, correctly predicts novel forms of manipulative speech, and yields an
account of propaganda and its mechanisms that is superior to those in the literature.

I come not, friends, to steal away your hearts;
I am no orator, as Brutus is;
But (as you know me all) a plain blunt man
That love my friend, and that they know full well
That gave me public leave to speak of him.
For I have neither writ, nor words, nor worth,
Action, nor utterance, nor the power of speech
To stir men’s blood; I only speak right on.

Marc Antony
Julius Caesar, act III, scene II

Donald Trump should be given the medal of freedom
for speaking his mind in such a bold, honest, and
straightforward manner.

Ted Nugent

Contact: Justin D’Ambrosio <justin.d’ambrosio@anu.edu.au>


1. Introduction

Language is a tool of influence. Indeed, the history of political oratory shows that
there is no better tool than language for its exertion—it is immensely powerful.
Marc Antony’s speech turns Caesar’s funeral crowd into a frenzied mob; Donald
Trump tweets and the stock market crashes. But the influence exerted through
such speech is rarely innocent. Marc Antony does not induce a frenzy through
rational persuasion. Trump does not incite crowds and terrify day-traders by
arguing from premises to conclusions. Marc Antony and Trump—like countless
others in both public discourse and private conversation—speak manipulatively,
and it is exactly because their speech is manipulative that it has the power it
does. Manipulative speech has the capacity to subvert our reason, undermine
our agency, and work against our interests, and is standardly taken to be a core
component of propaganda, which has the potential to do all three of these things
on a massive scale.1
But it’s a striking fact that while we know that so much of the speech we en-
counter is manipulative, and we know the harms that manipulative speech can
have, the philosophy of language has no general account of what manipulative
speech is, how it functions, or the variety of forms that it can take. For the most
part, the philosophy of language has focused on speech that is innocent. As
we will see below, the standard account of communication in the philosophy of
language—inspired by Grice—assumes that speech is part of a cooperative enter-
prise, and that speakers aim to make public how their speech acts contribute to
the common goal of conversation. As a consequence, the philosophy of language
has largely overlooked manipulative speech as an important category of inves-
tigation. And even when it has focused on manipulative speech, it has focused
only on particular manipulative speech acts, leaving what such speech acts have
in common—i.e. their manipulative element—unexplained.
This paper develops a theory of manipulative speech. The first component is
a definition of manipulative speech formulated in terms of a speaker’s intentions.
On my view, manipulative speech involves a deliberate, coordinated violation
of the two core Gricean norms for conversation: Cooperativity and Publicity.
Speakers often have goals that conflict with the goals that their interlocutors
have for a conversation. An audience may have the goal of gaining knowledge
concerning a topic of discussion, but a speaker may have reason not to cooperate
with this goal. In such cases, it will often be rational for the speaker to violate
the norm of Cooperativity—it will be rational for them to speak strategically. But
1 For accounts of propaganda’s achievements and effects, see Bernays [1928], Herman and
Chomsky [1988], Chomsky [1991], among many others. Nearly everyone who has attempted
to define propaganda agrees that it is closely related to manipulative speech. I discuss this
connection in §6.

when a speaker speaks strategically, it will also often be rational to appear to
be fully cooperative, hiding the fact that their speech is strategic. In doing so,
the speaker violates not only the norm of Cooperativity, but also the norm of
Publicity. This is when speech is manipulative. Thus, in a slogan, manipulative
speech is covertly strategic speech.
The second component of the theory is an account of the primary mechanism
by means of which manipulative speech operates. Speakers manipulate audi-
ences by cultivating and exploiting linguistic trust. Linguistic trust is not merely
trust that a speaker is telling the truth; it is trust that a speaker is being fully
cooperative with the audience’s goal for the conversation. When an audience is
linguistically trusting, they take the speaker to be fully—or at least sufficiently—
cooperative, and so take the speaker’s speech acts to be non-strategic. Isolating
linguistic trust as the mechanism of linguistic manipulation reveals a suite of
tactics that speakers use to cultivate such trust, and it also reveals which kinds
of audiences are most susceptible to linguistic manipulation.
This theory has important consequences for the philosophy of language and
for political philosophy. In the philosophy of language, this definition makes
precise predictions about which forms of speech qualify as manipulative—any
speech that is deliberately non-cooperative, when hidden, can serve as an in-
stance of manipulative speech. These predictions are borne out across a wide
range of cases. The definition correctly classifies—and reveals the manipula-
tive structure common to—many forms of speech that we intuitively take to be
manipulative, including lying, misleading, dogwhistling, bullshitting, flattering,
and distracting. It also correctly predicts novel forms of manipulative speech—
including a form of manipulation via underspecification that I call “pied piping”.
Thus, the definition plausibly singles out an important natural kind of speech
that falls outside of the scope of standard Gricean theories of conversation.
Turning to political philosophy, the theory serves as the basis for a novel ac-
count of propaganda and the mechanisms by means of which it operates. On the
view I develop, propaganda is a manipulative contribution to public discourse:
a contribution that is strategic, but made to appear cooperative with the public’s
goals. Propagandists thus hide the true aims of their speech: Marc Antony de-
ceives the funeral crowd by insisting that he is a plainspoken, blunt man who has
no ulterior motives; Trump deceives his audience by coming across as a guileless
politician who speaks his mind—a straight shooter. This makes clear the sense in
which propaganda irrationally influences public opinion. By cultivating and ex-
ploiting the public’s linguistic trust, the propagandist deceives the public about
whether a contribution to public discourse is instrumental to their own goals,
and in so doing brings about his desired ends.

2. Manipulation and the Gricean Model of Conversation

Manipulative speech is a form of manipulation, or at least a tool that aids in
it. So one might think that the easiest way to develop a theory of manipulative
speech would be to take our best theory of manipulation, take our best theory
of linguistic communication, combine them, and voilà: a theory of manipulative
speech.
Unfortunately, things are not so simple. At first glance, our best theories
of manipulation and our best theories of linguistic communication appear to
be altogether incompatible. Manipulation is standardly—even definitionally—
taken to be a form of influence that is not fully rational, but is also not coercive.2
Consider the characterization of manipulation given by Joseph Raz:
[M]anipulation, unlike coercion, does not interfere with a person’s
options. Instead it perverts the way that person reaches decisions,
forms preferences, or adopts goals. [Raz, 1988]
Here Raz contrasts manipulation with both rational persuasion and coercion, in-
dicating that manipulation is not fully rational insofar as it perverts the way that
an agent deliberates. The view that manipulation perverts—or perhaps partially
subverts—rational deliberation is widely shared, and is often treated as a con-
straint on an adequate definition of manipulation.3 But turning this constraint
into a definition requires specifying how manipulation interferes with rational
deliberation.
The dominant view is that manipulation exerts its influence covertly, by
means of trickery or deception, and through this means influences agents in
ways that work against their interests.4 Claudia Mills’ characterization of manip-
ulation illustrates both of these points:
We might say, then, that manipulation in some way purports to be
offering good reasons, when in fact it does not. A manipulator tries
2 Here and throughout, I use “interests” in the narrow, internalistic sense, to mean something
like “desires”, “things a person takes to be in their interest”, or “goals”. I make this choice for
two reasons. First, construing “interests” in this way allows for the possibility of paternalistic
manipulation. I take paternalistic manipulation to be manipulation that attempts to influence
someone in ways that are in their interests, but against what they take their interests to be, or at
least against what they immediately want. Second, this usage is consonant with the literature
in decision and game theory, which generally treats utilities in this narrow or internal sense. I
return to this point in §3.
3 In addition to Raz, Noggle [1996], Barnhill [2014], Hanna [2015], Stanley [2015], and
Sunstein [2016] all hold the view that manipulation is a form of influence that perverts or
partially subverts rational deliberation.
4 But there is disagreement over whether manipulation must be covert. A significant minority
of theorists—including Kligman and Culver [1992], Noggle [1996], Barnhill [2014], and Hanna
[2015]—hold that manipulation need not be hidden, and may operate out in the open. I return
to this point at length in §4.

to change another’s beliefs and desires by offering her bad reasons,
disguised as good, or faulty arguments, disguised as sound—where
the manipulator himself knows these to be bad reasons and faulty
arguments. [Mills, 1995, p. 100]

On Mills’ view, manipulation is covert—it involves deceiving an agent about the
quality of certain reasons. Further, insofar as bad reasons are ones that, if acted
upon, would not lead to the realization of the agent’s interests, manipulation can
be seen as bringing about a failure of instrumental rationality—it gets the agent
to act in ways that conflict with her own interests or goals. Mills’ view thus serves
to illustrate the contours of the standard view of manipulation: manipulation is
a form of influence that aims, through hidden or covert means, to influence a
person in ways that work against her interests.5
The dominant picture of communication within the philosophy of language,
which can be traced back to Grice [1989b], could not be more at odds
with manipulation so characterized. The outlines of Grice’s theory are familiar:
a speaker means something by an utterance just in case the speaker utters words
intending to communicate a particular content, and to do so on the basis of the
recognition of that very intention. The audience has understood the speaker
just in case they recognize the speaker’s intention. Sometimes, speakers mean
things that differ from what they say, and what they mean is recoverable by audi-
ences through reasoning about the goals and actions of rational agents engaged
in conversation. In this way, Grice is often said to have situated the theory of
conversation within the theory of rational human activity more generally.
There are two principles that are essential to the way that Grice locates con-
versation within the theory of rational behaviour: Cooperativity and Publicity.
These principles serve both as norms governing speakers as well as presump-
tions that audiences rely on in determining what those speakers mean.6 Grice
also held that conversation is an enterprise with a goal, and that the principles
of Cooperativity and Publicity hold relative to a particular conversational goal.
Roughly, Cooperativity says that your contributions to the conversation should
further the goal of the conversation, and Publicity requires that you make it pub-
lic how they do so.
The problem is that this conception of communication—and any view of
communication that incorporates its basic assumptions—appears to preclude the
5 While their views differ in the details, Kasten [1980], Goodin [1982], Scanlon [1998],
Sunstein [2016], and Cohen [2018] all hold views of this general form.
6 See Bach and Harnish [1979], Bach [2005], and Szabó [2020] for discussion. These principles
are normative insofar as they specify which contributions to conversation are felicitous, or
as Szabó puts it, they spell out what it is for a conversation to go smoothly. But they are also
descriptive: Grice clearly thinks that conversation is standardly or by default cooperative,
which is why we are justified in presuming Cooperativity when we interpret speakers.

very possibility of manipulative speech. On the Gricean model, conversation is
cooperative, public, and rational—but manipulation, it appears, is none of these
things. In order to move past this impasse, we will need to discuss the pillars of
the Gricean model in more depth.

2.1. Cooperativity
Grice phrases his Cooperative Principle as follows:

“Make your contribution such as is required, at the stage at which it
occurs, by the accepted purpose or direction of the talk exchange in
which you are engaged.” [Grice, 1989b, p. 26]

Grice then goes on to present various “maxims” that spell out what it is to abide
by the cooperative principle: Quantity, Quality, Relation, and Manner.

Quality: Try to make your contribution one that is true.

(i) Do not say what you believe to be false.
(ii) Do not say that for which you lack evidence.

Quantity:

(i) Make your contribution as informative as is required for the current
purposes of the exchange.
(ii) Do not make your contribution more informative than is required.

Relation: Be relevant.

Manner: Be perspicuous.

(i) Avoid obscurity of expression.
(ii) Avoid ambiguity.
(iii) Be brief (avoid unnecessary prolixity).
(iv) Be orderly. [Grice, 1989b, pp. 26-27]

These maxims specify what can be expected from cooperative speakers, and in
so doing, serve as a guide for interpreters in trying to calculate what a speaker
means by a particular utterance. When what a speaker says obviously violates
one of these maxims, the Gricean picture does not invite us to conclude that the
speaker is in fact being uncooperative. The Gricean model presumes that speak-
ers are being cooperative, and uses this presumption as a premise in reasoning
to the conclusion that the speaker must have meant something other than what
they said.
In spelling out what it is for a speaker to be cooperative, these maxims appear
to place a collection of constraints on contributions to a conversation that must
be balanced against each other. Thought of in this way, the maxims, and so the
cooperative principle, appear to specify what it is for a participant to make an
optimal contribution to a conversation with a particular goal. With this idea in
mind, we can formulate the principle of Cooperativity as follows:

Cooperativity: Ensure that your speech acts in the conversation contribute opti-
mally to the goal of the conversation.

This way of thinking about Grice’s principle is intuitive, has a long history, and
is part of a well-developed program within pragmatics.7 But admittedly, it is
controversial.8 I adopt this formulation for definiteness, and because it has the
advantage of allowing for degrees of cooperativity. But the view of manipulative
speech I present below can be expressed using a variety of alternative formula-
tions.

2.2. Publicity
The principle of Publicity is not one that Grice states explicitly. However, for
the recovery of a speaker’s intentions to be possible, Publicity must be in effect.
Following Szabó [2020], we can state the principle as follows:

Publicity: Ensure that it is common knowledge what speech acts are performed
in the conversation and what the goal of the conversation is. [Szabó, 2020]9
7 The idea that the cooperative principle and its associated maxims are best understood
as optimality constraints is developed at length by Blutner [2000], Blutner and Zeevat [2004],
Blutner et al. [2006], and contrasted with the game-theoretic approach by Franke [2009, Ch.
4]. The OT approach to pragmatics builds on Horn’s [1984] reduction of the Gricean maxims
to his principles Q and R, which together can be seen as specifying the notion of an optimal
contribution to a conversation. Similar principles—the Q and I principles—are discussed by
Levinson [1983].
8 There are many alternative formulations of the principle corresponding to different frame-
works that incorporate Grice’s basic insight. Szabó [2020] phrases Cooperativity as follows:
“Ensure that it is common knowledge how speech acts in the conversation contribute to the
goal of conversation.” Asher and Lascarides [2003, 2013] state what they call the rule of Strong
Cooperativity: “Normally, if A M-intends that φ, then B should intend that φ.” Game-theoretic
accounts of the Gricean program implement cooperativity by treating the sender and receiver
as having aligned preferences (see, for instance, Franke [2009, p. 126]).
9 We might also add the requirement that speakers make it common knowledge how their
speech acts contribute to the goal of the conversation, but for now I will set this complication
to one side.

The reason that Grice must endorse the principle of Publicity is that in order
to engage in pragmatic reasoning, interlocutors need to be able to determine
whether a speech act complies with Cooperativity. If Publicity is not satisfied,
participants cannot determine this. Thus, Publicity is entailed by the very pos-
sibility of calculating an implicature. However, as with Cooperativity, there is
controversy over how to best formulate Publicity, and I adopt this formulation
only for definiteness. The account of manipulative speech I present below can be
recast using a variety of different formulations.10
Cooperativity and Publicity can both be seen as norms that govern conver-
sational participants insofar as those participants have a common conversational
goal. If they do have such a goal, then Cooperativity simply exhorts interlocu-
tors to pursue that goal optimally. Further, if interlocutors know that they have a
common goal, then they will be justified in presuming Cooperativity when they
interpret one another. This reasoning likewise extends to Publicity. If partici-
pants have a common goal, then it will be rational for them to make clear what
that goal is, and how their contributions to the conversation promote it. If one
does not make public which speech act one is engaged in, and what one’s goal
is in undertaking that act, then one’s conversational partner will not reliably be
able to determine what is meant, and so the goal of the conversation will not be
achieved.

2.3. The Goal of Conversation


Together, the principles of Cooperativity and Publicity encourage speakers to
contribute optimally to the goal of the conversation, and to make public how
they do so. But what is the goal of conversation? Many philosophers of language
inspired by Grice hold that conversation often, or perhaps even typically has the
goal of sharing knowledge:

Shared Knowledge: The goal of conversation is to share private knowledge pertaining to a topic of common concern. [Szabó, 2020]

In presenting his basic approach to conversation and implicature, Grice himself
says that he is operating primarily with the goal of exchanging information in
10 An alternative is to formulate Publicity not in terms of speech acts, but in terms of a
speaker’s intentions as follows: “Ensure that your intentions in making a contribution to the
conversation, and the goal you pursue in making that contribution, are common knowledge.”
Within game-theoretic pragmatics, and pragmatic theories that build on optimality theory,
Publicity is standardly taken for granted: most such theories presume that a speaker’s signals
and their payoffs or goals are common knowledge. However, see Franke and van Rooij [2015]
and Asher and Paul [2018] for formal approaches that do not presume Publicity.

mind, and work in pragmatics and the philosophy of language has largely fo-
cused on linguistic exchanges that have this goal.11 Putting this goal together
with the above principles yields the Gricean picture of conversation: conversa-
tion is a joint enterprise in which interlocutors aim to optimally and publicly
contribute to the goal of sharing knowledge pertaining to a topic of common
concern.
The idea that the goal of conversation is to share knowledge is a core com-
ponent of a prominent development of the Gricean program due to Robert Stal-
naker.12 On the Stalnakerian model, conversational participants begin conversa-
tion with a store of common knowledge, represented by a set of possible worlds.
Topics of conversation are represented by questions under discussion, which par-
tition that set into cells—the possible answers to the question.13 Cooperativity
can then be formulated as the requirement that speech acts in a conversation con-
tribute (sufficiently, adequately, or perhaps optimally) to answering the question
under discussion by ruling out potential answers. Publicity can be defined as the
requirement that speakers make it common knowledge how each of their speech
acts works toward this goal. Conversation then proceeds sequentially, with inter-
locutors taking turns sharing knowledge, until the store of common knowledge
entails a complete answer to the question under discussion.
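To fix ideas, the core of this picture can be stated compactly. What follows is a minimal reconstruction, with notation of my own choosing rather than Stalnaker's or Roberts's. Let $W$ be a set of possible worlds and let the common ground be $c \subseteq W$, the set of worlds compatible with what the participants commonly know. A question under discussion $Q = \{q_1, \ldots, q_n\}$ partitions $c$ into its possible complete answers, and an assertion with content $p$ updates the common ground by intersection:
\[
c \;\mapsto\; c \cap p.
\]
On this reconstruction, Cooperativity requires that each update move $c$ toward entailing a complete answer, Publicity requires that it be common knowledge which update each utterance performs, and the conversation reaches its goal when $c \subseteq q_i$ for some cell $q_i$ of $Q$.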
But the idea that the goal of conversation is to share knowledge is idealized
in two ways that will become important as our discussion proceeds. First, while
it may be the case that sharing knowledge is standardly the goal of conversation,
even Grice concedes that conversation will often not have this goal. Instead, the
goal of our conversation may be for us to amuse or express ourselves, for me
to direct you in some task, for us to bond, or perhaps for us to form the basis
for future cooperative endeavors. Second, and even more importantly for our
purposes, the assumption that conversations have a single goal, and that this
goal is shared among participants, excludes conversations in which participants
have goals that conflict—it excludes speech that is not fully cooperative from the
scope of the Gricean theory. Developing a theory of manipulative speech requires
us to consider conversations in which this assumption does not hold, and it
is to such conversations that we now turn.
11 See the first pages of Grice [1989a], Stalnaker [2014], and Szabó [2020]. The idea that
the goal of conversation is to share knowledge is built into many pragmatic frameworks and
models. For criticisms of this view see Beaver and Stanley [2019].
12 See Stalnaker [1998, 2002, 2014]. Stalnaker’s model of conversation treats it as a sequential,
cooperative game of perfect information.
13 The idea that the topic of conversation can be represented by a question under discussion
is due to Roberts [1996].



3. A Definition of Manipulative Speech

3.1. Strategic Speech


In conversation, speaker and audience often have conflicting goals. An audience,
for instance, may have the goal of gaining knowledge concerning a topic of mu-
tual concern, but a speaker may have reason not to provide them with it. In such
a case, it will often be rational for the speaker to speak in a way that conflicts
with the audience’s goal for the conversation. Call such speech strategic speech.
Economists and political scientists have long been interested in strategic speech.
For instance, the seminal paper of Crawford and Sobel [1982] inspired a range
of work examining the equilibria that arise in strategic signalling games; i.e. sig-
nalling games in which there is a conflict of interest between sender and re-
ceiver.14 Likewise, political scientists have examined the dynamics of strategic
speech in electoral competition, where it is common for politicians to speak in
ways that conflict with voters’ goals.15 More recently, the philosophy of lan-
guage has begun to focus on strategic speech as well, and there has been much
discussion of how accommodating strategic speech requires us to modify our
idealized models of conversation.16 Philosophers and linguists have even de-
voted some effort to understanding particular forms of manipulative speech in
politics—particularly dogwhistles and propaganda.17
Before formulating a precise account of strategic speech, it will be helpful to
consider some examples.

Antique Sale: I am selling you an antique. You want me to share my
knowledge concerning the value of the antique, because you want to
pay me only as much as the antique is worth. But I want you to pay
me more than the item is worth—I want you to pay me as much as
14 See, for instance, Farrell and Rabin [1996], Feinberg [2011], Kamenica and Gentzkow [2011]
and Blume and Board [2014]. In signalling games, Cooperativity is standardly implemented
by treating the sender and receiver as having aligned utilities. Strategic signalling games are
games in which the utilities of the sender and receiver are misaligned. This misalignment is
intended to model conversations in which speakers are deliberately non-cooperative.
15 The literature on strategic speech in politics is extensive. See Aragonès and Neeman [2000],
Aragonès and Postlewaite [2002], Dickson and Scheve [2006] and Tomz and van Houweling
[2009] for representative examples.
16 For instance, Lee and Pinker [2010], Fricker [2012], Asher and Lascarides [2013], Asher
et al. [2017], Asher and Paul [2018], Camp [2018], Langton [2018], Beaver and Stanley [2019],
Cappelen and Dever [2019], McGowan [2019], and Quaranto and Stanley [forthcoming]. Work
on slurs and derogatory speech also plausibly falls into the category of strategic speech—see
Bolinger [2017], Bach [2018], and Nunberg [2018] for discussion.
17 See Franke and van Rooij [2015], Henderson and McCready [2018], Saul [2018], and
Stanley [2015] for discussions that touch on manipulative speech. As we will see below, on my
definition, lying and misleading also qualify as manipulative. See Fallis [2009], Saul [2012], and
Stokke [2013] for an overview.

I can get you to pay. In such a case, my monetary goal makes it so
that it is not rational for me to be fully cooperative with respect to the
goal of sharing knowledge concerning the antique’s value. Rather, it
is rational for me to speak in a way that gets you to give me as much
money as possible.
Cases of sales and bargaining like Antique Sale are perhaps the simplest cases of
strategic speech. Indeed, the case of bargaining is what motivated Crawford and
Sobel’s original model of strategic information transmission. In this case, your
goal is to know the antique’s true value, but given my goal, it is not rational for
me to be fully cooperative. Rather, it is rational for me to speak in a way that is
not optimal for furthering your goal—it is rational for me to speak strategically.
In their model of such cases, Crawford and Sobel show that it is rational for me
to be strategically vague or general concerning the antique’s value, and that the
more our interests conflict, the less information I will reveal.18
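The shape of Crawford and Sobel's result can be conveyed with the standard uniform-quadratic version of their model; the particular functional forms below are the textbook ones, not anything specific to the present discussion:
\[
\theta \sim U[0,1], \qquad U^{R}(a,\theta) = -(a-\theta)^2, \qquad U^{S}(a,\theta) = -\bigl(a-(\theta+b)\bigr)^2, \quad b > 0.
\]
Here the seller privately observes the antique's value $\theta$, the buyer chooses an offer $a$ that she would like to match $\theta$, and the bias $b$ measures how much the seller would like the offer to exceed the true value. In every equilibrium the seller's messages reveal only which cell of a finite partition of $[0,1]$ the value falls in, and the maximal number of cells shrinks as $b$ grows; when the conflict is large enough, only the uninformative "babbling" equilibrium remains.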
Bargaining and sales aren’t the only examples of strategic speech; such speech
is also ubiquitous in politics:
Town Hall: A politician speaks to his constituency at a town hall. The
people in his constituency are interested in getting straight answers
concerning his policies and positions, so they can evaluate whether
his actions will further their interests. His interest, however, is in
getting his constituents—as many of them as possible—to vote for
him, while also keeping his policy options open once in office, and
limiting his accountability for lies. The politician wishes to speak in
a way that maximizes votes and minimizes commitments.
In this case, it is clearly rational for the politician not to contribute optimally to
the audience’s goal of coming to have knowledge concerning his plans. Again
it is rational for him to speak strategically. There are many forms of strategic
speech that the politician can use to achieve his goal. In this case it’s plausible
that the politician will speak vaguely or ambiguously, or perhaps in ways that are
highly general. This will allow him to maximize his appeal while simultaneously
minimizing the likelihood that he will incur the costs of breaking a promise or
being caught lying. Such cases are discussed at length, and modelled formally,
by Aragonès and Neeman [2000].
Finally, consider the following example from Solan and Tiersma [2005], in
which a prosecutor is cross-examining a defendant, Bronston:
Cross-Exam:
Prosecutor: Do you have any Swiss bank accounts, Mr. Bronston?
18 More precisely, what they demonstrate is that the information that I reveal is a strictly
decreasing function of the degree to which our interests conflict, and so equilibrium signalling
is more informative when the preferences of sender and receiver are aligned.

Bronston: No, sir.
Prosecutor: Have you ever?
Bronston: The company had an account there for about six months,
in Zurich. [Solan and Tiersma, 2005], as quoted in [Asher, 2012]

In this case, the goal of the prosecutor—and indeed of most people in the court,
including the judge and jury—is to come to know whether Mr. Bronston has,
or has ever had, a Swiss bank account. Mr. Bronston has a goal—the goal of
avoiding jail time—that conflicts with the prosecutor’s goal, so he speaks strate-
gically: he says something that does not directly answer the question. Of course,
in doing so, Bronston also implicates a negative answer to the question, and so
attempts to mislead the prosecutor. We will discuss this case further below.
In each of these cases, a speaker knowingly speaks in a way that conflicts
with the audience’s goal of coming to have knowledge concerning a mutually
salient topic, and in doing so, deliberately speaks in a way that is not fully co-
operative. But such speech does not fit into the Gricean account of conversation
outlined above. The Gricean approach to conversation assumes that conversa-
tions have a single, shared goal—typically understood to be the goal of sharing
knowledge—and assumes that interlocutors are cooperative with respect to this
goal. But in strategic speech, speaker and audience have distinct, conflicting
goals for the same conversation.19 Without specifying how the idea of the goal of
conversation extends to cases in which conversational goals conflict, the Gricean
cannot even say that strategic speech violates Cooperativity, since the principle
of Cooperativity is defined in terms of the goal of conversation.
In order to account for divergent conversational goals, I propose that we
revise the Gricean conception of the goal of the conversation as follows:

(1) Let the salient goal of the conversation be the goal G such that
a. the audience’s goal for the conversation is G, and
b. the speaker knows that the audience’s goal for the conversation is G.

This revision is natural, given the dynamics of the cases just discussed, and it al-
lows us to dispense with the two idealizations introduced by the assumption that
the goal of conversation is to share knowledge. First, the idea that a conversation
has a salient goal does not entail that all participants in the conversation share
that goal. As a consequence, this approach accommodates the fact that in each
of the above cases, the speaker does not share what he knows is the audience’s
19 Here I am making two simplifying assumptions that I will later discharge. First, I am
assuming that audiences are unified or monolithic, so that they can be seen as having a single
goal for the conversation, as opposed to multiple conflicting goals. Second, I am assuming that
all strategic speech occurs as part of conversation. I will discharge both of these assumptions in
§6, when I discuss how my account can be extended to yield an account of propaganda.

goal for the conversation.20 Second, the salient goal of the conversation need not
have anything to do with sharing knowledge. While in each of the cases we have
discussed, the salient goal is to share knowledge, the salient goal could just as
easily have been something altogether different.21
With the notion of a conversation’s salient goal in hand, the principles of
Cooperativity and Publicity can be reformulated as follows:

Cooperativity: Ensure that your speech acts in the conversation contribute opti-
mally to the salient goal of the conversation.22

Publicity: Ensure that it is common knowledge which speech acts you perform
in the conversation and what your goal for the conversation is.

These revised principles are well-defined in cases where the conversational goals
of speaker and audience conflict, which will allow these principles to serve as the
background against which we can formulate our account of manipulative speech.

3.2. Strategic Linguistic Intentions


The first step in developing a theory of manipulative speech is to provide a def-
inition of strategic speech. Using the idea of a conversation’s salient goal, to-
gether with our revised principle of Cooperativity, we can formulate an account
of strategic speech in terms of the intentions of a strategic speaker:

A speaker S utters U to an audience R with a strategic linguistic intention
if and only if:

S1 S intends that U cause R to undertake some course of action A;


20 In contrast to the goal of conversation, the salient goal of conversation is not shared, and so
will vary depending on who is speaking, and who the audience is. In strategic speech in which
interlocutors’ goals are common knowledge—in mere strategic speech—this is as it should be:
each participant speaks in a way that is deliberately not optimal for pursuing what they know
is the other’s goal. However, in manipulative speech, there is only one salient goal, because the
manipulated party is deceived about the speaker’s goal. See fn. 26.
21 This revision is consistent with a variety of different approaches to accommodating strate-
gic speech. It is consistent with the idea, strongly suggested by the Stalnakerian framework, that
the goal of strategic conversation is still to share knowledge, but that the goal is not one that
need be shared by all conversational participants. The Stalnakerian can endorse this revision by
insisting that the salient goal is to share knowledge. It is also consistent with approaches that do
not treat conversation itself as having a goal, but instead treat participants as having distinct,
potentially conflicting goals. This is the approach taken, for instance, by Asher and Lascarides
[2013], who claim that participants in a conversation are being cooperative if and only if, and
to the degree to which, their goals for the conversation are aligned.
22 Defining Cooperativity as acting to further the goals of the audience is of a piece with
work in game-theoretic pragmatics, where it is common to assume that the utilities of the
audience are what fix the question under discussion [van Rooij, 2003, Franke, 2009].

S2 In order to fulfill the intention in S1, S utters U intending to be less
than fully cooperative with respect to the salient goal of the conversa-
tion, G; and
S3 S intends S1 and S2 because S has goal G∗ that conflicts with G.

[S1] indicates that a strategic speaker utters some words intending to bring about
a particular action. While I think it is plausible that strategic speech is oriented
toward action, [S1] can be generalized so that it applies to any effect that a speech
act can have—any perlocutionary effect. [S2] spells out the idea that strategic
speech involves a deliberate violation of Cooperativity. If the speaker knows
that the audience’s goal for the conversation is G, the speaker utters U intending
to be less than fully cooperative with G—deliberately making U non-optimal
for pursuing G—in order to achieve the intended perlocutionary effect. Finally,
[S3] spells out the reasons the speaker is less than fully cooperative with G: it
is due to the fact that the speaker has some other goal, G∗, satisfaction of which
conflicts with satisfaction of G. Accordingly, the speaker intends to bring about a
perlocutionary effect A that furthers G∗ by speaking in a way that conflicts with
G. We can then say that a speaker speaks strategically if she utters words with a
strategic linguistic intention.
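The decision-theoretic structure behind [S1]-[S3] can be illustrated with a toy model in which a speaker scores candidate utterances both by how well they further the salient goal G and by how well they further her conflicting goal G∗. Everything in the sketch below (the candidate utterances, the scoring, the numbers) is an invented illustration, not part of the paper's formal apparatus:

# Toy illustration of strategic speech; all utterances and values are invented.
# Each candidate utterance is scored by how well it furthers the salient goal G
# (the audience's goal) and how well it furthers the speaker's conflicting goal G*.

CANDIDATES = {
    "full, precise answer":          {"value_for_G": 1.0, "value_for_G_star": 0.1},
    "true but overly general reply": {"value_for_G": 0.4, "value_for_G_star": 0.7},
    "technically true, misleading":  {"value_for_G": 0.2, "value_for_G_star": 0.9},
}

def cooperative_choice(candidates):
    """A fully cooperative speaker picks the utterance that contributes optimally to G."""
    return max(candidates, key=lambda u: candidates[u]["value_for_G"])

def strategic_choice(candidates):
    """A strategic speaker (S1-S3) picks the utterance that best furthers G*,
    even though that choice is deliberately non-optimal for G."""
    return max(candidates, key=lambda u: candidates[u]["value_for_G_star"])

print("Cooperative speaker:", cooperative_choice(CANDIDATES))
print("Strategic speaker:  ", strategic_choice(CANDIDATES))

On these invented numbers the cooperative speaker gives the full answer while the strategic speaker gives the misleading one. Nothing in the sketch yet requires that the strategic choice be hidden; that further element is what §3.3 and §3.4 add.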
In the three cases just discussed, it is clear that each of these conditions is
met. And on certain ways of spelling out the details of our examples, it may be
the case that the speakers are merely speaking strategically. However, as we will
see presently, on the most natural understanding of these cases, each speaker is
not only speaking strategically, but also trying to manipulate their audience. I
now turn to the question of what distinguishes manipulative speech from mere
strategic speech, and delay further discussion of our examples until §3.4.

3.3. Covert Speech


We saw above that strategic speech is aimed toward bringing about certain kinds
of perlocutionary effects—paradigmatically, bringing it about that the audience
acts in a certain way. But some speech acts can only achieve their perlocutionary
aims if the audience remains in the dark about how the speaker manages to bring
them about. Call such speech acts covert speech acts. It is in being covert that
manipulative speech differs from mere strategic speech.
Jennifer Saul [2018] defines covert speech acts as follows:

A covert speech act is a speech act that can bring about its perlocutionary
aim only if it is not recognized by the audience as being that speech act.
[Saul, 2018]

Speech acts in this category include lying, misleading, bullshitting, deceiving,
covert dogwhistling and (maybe) flattering, among others. Covert speech acts
violate Publicity. Publicity requires that a speaker make it common knowledge
which speech act she is engaged in, and what her goal is in undertaking that
speech act. But in the kinds of speech acts just listed, it is essential to the success
of the speech act—essential to bringing about its perlocutionary aim—that it not
be recognized as the speech act that it is. If a speaker is recognized as lying or
misleading, for example, the perlocutionary aim of the speech act will not be
achieved.
However, there is an alternative definition of covertness that is more general
than Saul’s, and which subsumes her definition as a special case. Recall that the
principle of Publicity has two components: make it common knowledge which
speech acts you perform, and what your goal for the conversation is. Covert
speech acts, on Saul’s definition, involve a violation of the first component. The
alternative definition of covertness involves violating the second component:

A token speech act is covert if and only if it can bring about its perlocu-
tionary aim only if the goal with which that speech act is undertaken is not
recognized by the audience.

To see how these definitions differ, recall our politician from Town Hall. In Town
Hall, the politician may well hide his goal in undertaking a speech act without
hiding which speech act he undertakes. For instance, the politician might strate-
gically assert something true, but very general, in the hope of undertaking the
fewest commitments possible. This speech act token can be covert in the second
sense—he may attempt to hide that he is trying to avoid commitment—even if he
is recognized as making an assertion, and even if assertion is not a covert speech
act in Saul’s sense.
Any token of a speech act type that is covert in Saul’s sense must be covert
in my sense as well. Whenever a speech act must go unrecognized in order to
be successful, it must work toward a hidden goal. But as the case just discussed
shows, the converse is not true. Thus, the second definition of covertness sub-
sumes the first as a special case. Manipulative speech is covert in this second,
more general sense. In manipulative speech, what is hidden is the speaker’s goal
in undertaking that speech act, and whether this goal conflicts with the salient
goal of the conversation.
Before seeing how this proposal can be spelled out, one might rightly ques-
tion whether manipulative speech—and indeed, whether manipulation more
generally—is always covert. There appear to be many instances of manipulation,
in speech and otherwise, that are not covert, and a range of theorists working
on manipulation hold that covertness is not necessary for manipulation.23 Con-
23 Barnhill [2014], for instance, defines manipulation as “directly influencing someone’s
beliefs, desires, or emotions such that she falls short of ideals for belief, desire, or emotion in
ways typically not in her self-interest or likely not in her self-interest in the present context.”

sider cases of being guilted, emotionally manipulated, charmed, or having one’s
views influenced by repeated or extensive exposure to a message of a particular
kind. Each of these forms of influence is arguably manipulative. Each appears
to be a form of influence that is neither fully rational nor coercive. And we are
aware of how each of them exerts its influence. But nonetheless, these modes of
manipulation are effective. Call these instances of bald-faced manipulation.24
In developing a theory of manipulative speech, I will for the most part set
bald-faced manipulation aside. I think it is plausible that many cases of bald-
faced manipulation are in fact closer to coercion than manipulation, and so may
not warrant the title of manipulation at all. However, even if bald-faced manip-
ulation does warrant the title, my intention is not to offer a conceptual analysis
of manipulation, or to provide necessary and sufficient conditions for applica-
tion of the phrase “manipulative speech”, as it is used in English. Rather, my
intention is to provide a definition that captures a central strand of the concept
of “manipulative speech”—a strand which subsumes a broad range of linguistic
phenomena that we intuitively classify as manipulative, and which is interest-
ing and theoretically fruitful. I thus leave discussion of bald-faced manipulative
speech for another time.

3.4. Covertly Strategic Speech


We now have accounts of both strategic speech—speech that works to further
a goal that conflicts with the salient goal of conversation—and covert speech:
speech undertaken to further a goal that is hidden from the audience. Combin-
ing these two accounts, we can define manipulative speech as covertly strategic
speech; manipulative speech is speech that is intended to further an ulterior goal
that conflicts with the salient goal of conversation. This leads to the following
definition:

A speaker S utters U to R with a manipulative linguistic intention if and
only if:

M1 S intends that U cause R to undertake some course of action A;
M2 In order to fulfill the intention in M1, S utters U intending
(a) to be less than fully cooperative with respect to the salient goal of
the conversation, G, and
Such direct influence need not be covert. Others who hold the view that manipulation need not
be covert are Kligman and Culver [1992], Noggle [1996], and Hanna [2015].
24 Bald-faced manipulation stands to manipulation roughly as bald-faced lying stands to
lying—as we will see below, the latter is a special case of the former. In cases of bald-faced
lying, it is common knowledge that the speaker is being uncooperative by asserting what they
know to be false. In cases of bald-faced manipulation, it is likewise common knowledge that
the speaker is speaking strategically, and it is common knowledge how they are doing so.

(b) that R believe that in uttering U, S is being fully cooperative with
respect to G; and
M3 S intends M1 and M2 because S has goal G∗ that conflicts with G.

We can then say that a speaker speaks manipulatively if and only if she utters
words with a manipulative linguistic intention. This definition is exactly the def-
inition of strategic speech, except for the addition of [M2b].25 Clause [M2b] adds
the condition that in uttering U, the speaker is attempting to get the audience
to believe that she is being fully cooperative with respect to the salient goal of
the conversation. In getting the audience to believe that her utterance is fully
cooperative, the speaker thereby hides the fact that her utterance is undertaken
to achieve some goal that conflicts with the audience’s goal for the conversation.
Clause [M2b] introduces the idea that a key feature of manipulative speech
is deceiving one’s interlocutor concerning one’s goals for the conversation. We
can see this if we return to Antique Sale. In Antique Sale, I am aware that your
goal for the conversation is to know the true value of the antique—that is the
conversation’s salient goal. In speaking manipulatively, I speak in a way that
is aimed toward getting you to pay me an exorbitant amount for an antique—
I speak strategically. But I also attempt to get you to think that my speech is
purely, or at least primarily, non-strategic: I aim to get you to think that I share
your goal of coming to have full information concerning the antique’s value, and
speak in a way that helps you achieve this goal.26 In order to accomplish these
things, I may deliberately speak vaguely, lie, mislead, or exaggerate, all while
attempting to hide that my speech is deliberately non-cooperative, and so make
myself appear ingenuous.
Turning now to Town Hall, on the view just proposed, the politician’s speech
is manipulative when it is not fully cooperative with the audience’s goal of get-
ting information concerning his plans, but he intends, and takes steps to ensure,
that the audience thinks that he is being fully revealing. This will often be impor-
tant. If, as we suggested above, it is rational for the politician to be strategically
vague, ambiguous, or general, it is very much to his advantage if he is not recog-
25 Weaker formulations of [M2b] are possible. Rather than all-out beliefs, it is plausible that
audiences merely have credences concerning speakers’ goals and the strategies they deploy.
Thus we might phrase [M2b] instead as the intention that the audience have a sufficiently high
confidence that the speaker is being fully cooperative, where what qualifies as sufficiently high
will depend on the audience or receiver’s utility function.
26 What is the salient goal of conversation when an audience who is being manipulated
contributes to a conversation, and so becomes the speaker? The answer is: it is undefined. Since
the audience is deceived about the speaker’s goal—and thinks it aligns with their own—there
is no salient goal. Likewise, Cooperativity will be undefined, since it is defined in terms of the
salient goal. But this is as it should be. One cannot be cooperative with an interlocutor if one is
deceived about their goals. One can, however, speak optimally with respect to the goals one
believes they have.

nized as being so. If he is so recognized, his speech will often not achieve its aim;
there are costs to being recognized as strategic. Thus the politician will disguise
his strategies. He can do this in any number of ways: by offering assurances
that he is being non-strategic, by speaking in ways that indicate he shares the
audience’s goals, by insisting that he is a straight-shooter, and even by speak-
ing erratically or in a down-home manner. All of these tactics can disguise the
fact that his speech is intentionally non-optimal for pursuing the salient goal of
sharing knowledge concerning the question at hand.
Now consider Cross-Exam. Above we said only that in Cross-Exam, Bronston
is being strategic by not offering a direct answer to the prosecutor’s question.
But Bronston’s response is also manipulative. Why? Clearly it is advantageous
for Bronston to hide that his speech is strategic. If Bronston can convince the
prosecutor that he is being fully cooperative, then the prosecutor will take his
statement concerning the company to implicate a negative answer to the pros-
ecutor’s question, as opposed to merely being an intentionally irrelevant claim
made to misdirect.27 This is important for Bronston. Without the assumption
that Bronston is being cooperative, the prosecutor will treat his response as ir-
relevant, and respond by saying “You haven’t answered the question.” Thus, by
hiding his non-cooperativity, and trying to get the prosecutor to think that he is
furthering the salient goal, Bronston aims first to get the prosecutor to interpret
his response as implicating a negative answer, and then, if things work out for
him, to believe what he has implicated.
In each of these cases, the speaker intends the audience to believe that she
shares their goal for the conversation, and in so doing she tries to hide her ulte-
rior goal. Against this background of deception, the speaker speaks in a way that
aims to further her ulterior goal. In Cross-Exam, Bronston attempts to further
his goal by getting the prosecutor to calculate a false implicature. But not all in-
stances of a speaker being covertly uncooperative trigger a process of pragmatic
repair. Whether an audience calculates an implicature will depend on whether
the speaker is obviously or subtly non-cooperative with what the audience takes
the speaker’s goal to be. It is plausible that the speakers in Town Hall and An-
tique Sale are subtly non-cooperative; they attempt to further their ulterior goals
without an obvious violation of a maxim. Grice [1989a] himself points out this
form of manipulative speech when he says that a speaker may “quietly and un-
ostentatiously violate a maxim; if so, in some cases he will be liable to mislead”
[Grice, 1989a, p. 30].
To illustrate, suppose that in Town Hall, the politician says something am-
27 Asher [2012] and Asher and Lascarides [2013] discuss the case of implicatures in strategic
contexts. They introduce a notion called “safety” that specifies when it is reasonable for an
audience to take a speaker to have implicated something. When the audience’s assessment of
the speaker’s cooperativity is high enough, it will be safe to calculate an implicature.

biguous. Against the background of presumed cooperativity, the audience may
then interpret this charitably, as having been intended to convey the most plausi-
ble of the available interpretations. Alternatively, if the politician says something
that is deliberately irrelevant, the audience—presuming cooperativity, and hav-
ing some measure of uncertainty about what is relevant—may take what he says
to be relevant after all, and so be distracted from a key issue. In Antique Sale, the
seller may intentionally provide too much information to the buyer—deliberately
violating the maxim of Quantity. This strategy may simultaneously further the
seller’s goal by obscuring the facts about the antique from the buyer, while also
hiding his goal by making him seem forthright, and so cooperative. Or the seller
may use technical terms to describe the antique, knowing that the buyer will not
understand those terms, and so will be forced to defer to him. But such terms
may simultaneously give him the illusion of precision, and thus of cooperativity,
again helping his strategy go undetected.
The moral is that manipulative speech involves an intentional, coordinated
violation of the norms of Cooperativity and Publicity. A manipulative speaker
violates Cooperativity in deliberately speaking in a way that conflicts with the
salient goal of the conversation. Often, this will mean that the speaker speaks in
such a way so as to avoid making a contribution that best answers the question
at hand. But the manipulative speaker likewise violates Publicity in attempting
to hide this non-cooperativity—hide her strategic choice of speech—from her
audience. The speaker has an ulterior goal G∗ that guides her action, but she
takes steps to ensure that the audience thinks that her interests are aligned with
theirs, and so that she is being fully cooperative with the audience’s goal for the
conversation, G.
The resulting picture is one on which manipulation is related to the old
rhetorical device of anti-rhetoric. Anti-rhetoric is a form of rhetoric, but one that
involves the implicit or explicit assurance that one’s speech is uncalculated, or
non-rhetorical.28 Marc Antony, above, prefaces one of the most successful de-
ployments of political rhetoric in all of history or literature with an assurance
that he has no skill for rhetoric—a clear case of anti-rhetoric. Likewise with Don-
ald Trump, whose apparent uncalculatedness has made many susceptible to his
influence. It is commonplace to praise Trump for his habit of “shooting from
the hip”, for being uncalculated, and for not caring about the typical ways in
which politicians craft their speeches. But if my theory of manipulation is cor-
rect, the appearance of being uncalculated is in fact an essential part of the ability
to manipulate. This appearance is an implicit assurance that Trump’s goals for
a particular interchange are simply to share knowledge, say what he thinks, and
not to craft speech that is oriented toward influencing his audience in a partic-
ular way. Moreover, this implicit assurance is accentuated by features like his
28 For an account of anti-rhetoric, see Leith [2011, 2014].

manner, his accent, his colloquial diction, and even his offensive missteps. Each
of these mannerisms contributes to the idea that Trump’s speech is nonstrategic.
And Trump isn’t the only offender—his performance is just the most obvious.

4. The Mechanisms of Manipulative Speech

Above I offered a definition of manipulative speech in terms of a manipulative
speaker’s intentions. But a definition is not yet a theory. We need to know how
speakers manage to use language to manipulate audiences; we need to know the
mechanism of linguistic manipulation. What we know so far is that a speaker suc-
cessfully manipulates an audience when her speech act has some perlocutionary
effect in virtue of being covertly strategic—that is, in virtue of a hidden violation
of cooperativity. In such cases, the audience will take the speaker to have been
cooperative when she in fact was not, and interpret her speech accordingly.
The ability to deceive audiences in this way depends on the audience’s as-
sessment of how cooperative a speaker is. Call this assessment of a speaker’s
cooperativity an audience’s degree of linguistic trust. Linguistic trust is an audi-
ence’s measure of the degree to which a speaker’s goals in speaking conflict with
their own goals for the conversation. This assessment will change depending on
the context. In political contexts, audiences may well be more wary of speakers,
and so have a low degree of linguistic trust, because they know that politicians
tend to speak strategically. In the example of our political town hall, anyone but
the most naive audience-member will know that the politician has goals that con-
flict with the audience’s goal of gaining knowledge concerning the politician’s
plans. By contrast, in conversations with friends, interlocutors will presume a
high degree of cooperativity; friends will rarely have interests that conflict with
being optimally cooperative with their interlocutor’s goal. Thus, among friends,
there will be a high degree of linguistic trust.
Linguistic trust is not just trust that a speaker is speaking truthfully. Rather,
it is trust that a speaker is contributing optimally to an audience’s conversa-
tional goal—trust that a speaker intends something precise, that they are be-
ing as informative as they should be, that they are being relevant, not obfus-
cating, and not working toward an ulterior, conflicting goal. The key feature
of a good manipulator is that she knows how to increase an audience’s assess-
ment of her cooperativity—she knows how to make herself appear linguistically
trustworthy—and knows how to exploit this trust. There are countless ways of
doing this, and I cannot hope to do justice to all the tactics that manipulative
speakers use here. But there are at least three approaches that manipulators can
take to making sure audiences trust that they are being cooperative.
The first is to exploit background presumptions. Certain contexts bring with
them the presumption that speakers in these contexts are speaking cooperatively.
It is plausible that audiences have a complex system of default presumptions
about how cooperative speakers are in a variety of different environments. The
manipulative speaker can recognize and exploit such contexts and the presump-
tions operative in them. Griceans themselves often claim that interpreters are
licensed to presume that speakers are being cooperative. This may often be the
case; cooperativity may be a justified default presumption. But such a presump-
tion can be exploited. A manipulative speaker can take advantage of being seen
as cooperative in order to achieve their ends. If an audience trusts that a speaker
is cooperative, they are more likely to accept what a speaker says, and so act in
ways that further the speaker’s goals.
The second tactic is to speak in ways that foster linguistic trust. There are
countless ways of doing this, many of which are specific to the particular goal
the speaker wishes to achieve. But by way of example, speakers can speak
colloquially—speak in a way that is “folksy”—to give the impression that they
are trustworthy; they can make use of technical terms to give themselves an air of pre-
cision and authority; they can offer assurances that they are not being strategic—
as Marc Antony does before his speech; they can consistently be cooperative so as
to make their next manipulative speech act more effective; or they can construct a
persona—through various linguistic or nonlinguistic means—that audiences take
to be non-manipulative.29 Much of the success of political manipulation rests on
the cultivation of guileless, ingenuous personas.
The final tactic that speakers can use to foster linguistic trust is to exploit
power differentials between them and the audience, including their perceived
authority in a particular area. Consider the case of a scientific expert speaking
to a non-specialist audience. Audiences will trust him more in virtue of his
standing as an authority—they will take him to be fully cooperative at answering
the question at hand, even if they cannot recognize how, or even if they take
what he says to be obscure. Thus, just as we defer to experts regarding truth, we
will also defer to experts regarding the other features of cooperativity—we will
take their speech acts to be relevant, optimally informative, precise, as well as
being true and backed by reasons. Of course, these three strategies do not come
close to exhausting the various ways speakers can manipulate, and this aspect of
manipulation may be its least systematic—as Plato says in the Gorgias, rhetoric
is not a science. But these strategies at least provide examples of the means by
which manipulative speakers foster linguistic trust.
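By way of illustration only, the notion of linguistic trust can be given a rough schematic rendering. The following sketch, in Python, is not part of the theory: the per-maxim components, the numerical degrees, and the default values attached to particular contexts are expository assumptions of mine.

    from dataclasses import dataclass

    # Illustrative default levels of trust for different environments; the
    # labels and numbers are stand-ins, not commitments of the theory.
    DEFAULT_TRUST = {
        "conversation_with_friend": 0.9,
        "political_town_hall": 0.4,
    }

    @dataclass
    class LinguisticTrust:
        """An audience's assessment of how fully a speaker's goals align with
        its own conversational goal, with a component for each Gricean maxim."""
        quality: float   # trust that the speaker speaks truly and with good reason
        quantity: float  # trust that the speaker is as informative as required
        relation: float  # trust that the speaker's contribution is relevant
        manner: float    # trust that the speaker is precise and not obfuscating

        def overall(self) -> float:
            # One simple way to aggregate the components; the theory itself
            # does not settle whether or how they combine.
            return min(self.quality, self.quantity, self.relation, self.manner)

    def default_trust(context: str) -> LinguisticTrust:
        level = DEFAULT_TRUST.get(context, 0.7)
        return LinguisticTrust(level, level, level, level)

    # The tactics surveyed above (exploiting default presumptions, folksy
    # speech, perceived expertise) all aim at raising these components without
    # the speaker actually being cooperative.
    print(default_trust("political_town_hall").overall())

On this way of putting things, the three tactics just surveyed are simply different ways of pushing the audience’s components upward while the speaker’s actual goals remain unchanged.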
How is linguistic trust related to the notion of trust discussed extensively
by social epistemologists?30 Here I cannot hope to address this issue fully. For
my purposes it suffices to point out that just as speaking truly and with good
reason is one of many components of cooperativity, trusting that a speaker is
speaking truly and with good reason is one of many components of linguistic
trust. Further, just as the other components of cooperativity are roughly captured
by the Gricean maxims, there are components of linguistic trust corresponding
to each of the Gricean maxims. Linguistic trust is thus a broad notion of trust
that plausibly subsumes issues of trust related to truth and justification as special
cases.

29 See Henderson and McCready [2018, 2019] for an account of dogwhistles that builds on
the idea of personas.
30 See, for instance, Goldberg [2012, 2013], Frost-Arnold [2014, 2016], Hawley [2014, 2019],
among others. One important question that carries over from the literature on trust is whether
linguistic trust is rational. This is an important question for the Gricean: is the presumption of
Cooperativity always rational? Here I will remain neutral.

5. Consequences for the Philosophy of Language

The definition above has a range of important consequences for the philosophy of
language. Perhaps the most important consequence is that it provides us with a
simple formula that makes clear predictions about which forms of speech qualify
as manipulative. The formula proceeds as follows. First, we consider all of the
forms of strategic speech. Here, since Grice’s maxims provide a rough guide to
what qualifies as cooperative speech, the maxims likewise serve as a rough guide
to the various forms that strategic speech can take. The definition then makes
the following prediction: any way of deliberately violating a Gricean maxim in
service of a hidden goal that conflicts with the salient goal of the conversation
will be an instance of manipulative speech.
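By way of illustration only, the formula can be rendered as a short program sketch. The following Python snippet is merely a schematic restatement of the prediction rule just given; the field names and the two example cases are my own glosses rather than part of the definition.

    from dataclasses import dataclass

    @dataclass
    class SpeechAct:
        deliberately_violated_maxims: frozenset  # e.g. {"quality"}, {"relation"}
        goal_conflicts_with_salient_goal: bool   # the ulterior goal conflicts with the conversation's goal
        violation_intended_to_go_undetected: bool

    def is_strategic(act: SpeechAct) -> bool:
        # Strategic speech: a deliberate maxim violation in service of a goal
        # that conflicts with the salient goal of the conversation.
        return bool(act.deliberately_violated_maxims) and act.goal_conflicts_with_salient_goal

    def is_manipulative(act: SpeechAct) -> bool:
        # Manipulative speech is covertly strategic speech: the violation of
        # Cooperativity is paired with a violation of Publicity.
        return is_strategic(act) and act.violation_intended_to_go_undetected

    # A lie: a hidden violation of Quality in service of a conflicting goal.
    lie = SpeechAct(frozenset({"quality"}), True, True)
    # An overt refusal to answer: strategic, but not covert, so not manipulative.
    overt_refusal = SpeechAct(frozenset({"quantity"}), True, False)
    print(is_manipulative(lie), is_manipulative(overt_refusal))  # True False

The subsections that follow apply this rule case by case.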

5.1. Lying
This definition correctly classifies lying as a form of manipulative speech. We
can define lying as a form of speech in which a speaker deliberately asserts some-
thing concerning the topic of discussion whose content she knows to be false.31
Lying then meets our definition of manipulative speech. Why? The key reason is
that in asserting something that she knows to be false, the speaker has violated
the norm of Cooperativity. One of Grice’s maxims is the maxim of Quality, which
exhorts us to be truthful. But in standard cases of lying, a speaker intends for
their assertion to be accepted as true, and so intends for their assertion to be seen
as cooperative—they intend that their interlocutor believe that in making that as-
sertion, they are being fully cooperative with the salient goal of the conversation.
Thus, the speaker has a manipulative linguistic intention, and so lying is a form
of manipulative speech.
31 For discussion of lying, see Fallis [2009], Saul [2012], and Stokke [2013], among others.

5.2. Misleading
Next, consider an oft-discussed case of misleading through implicature, similar
to the case of Bronston and the prosecutor above. Justin and Janet are dating, but
Justin, being the jealous type, is concerned that Janet is seeing her ex, Valentino,
and so asks about what she’s been doing:

(2) a. Justin: Have you been seeing Valentino this past week?
b. Janet: Valentino has mononucleosis. [Asher and Lascarides, 2013]

If Justin trusts that Janet is being cooperative, Justin will calculate the implicature
that she hasn’t been seeing Valentino. But if Janet is working toward a conflicting
goal—namely, hiding that she has been seeing Valentino—then Justin shouldn’t
calculate the implicature—his trust is unwarranted, and he should treat her re-
sponse as irrelevant. If Janet wants Justin to calculate the implicature, as opposed
to treating her response as irrelevant, it is rational for her to attempt to appear
cooperative. Thus, given Janet’s goal, it is rational for her to speak in a way
that conflicts with Justin’s goal of coming to know whether she has been seeing
Valentino, while also hiding that her speech is not fully cooperative, in order to
get Justin to calculate and accept a false implicature. In doing so, Janet meets the
conditions laid out in our definition of manipulative speech above. Misleading
is a form of manipulation.
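The role that linguistic trust plays in the example can be put schematically as a choice point for the hearer. In the following Python sketch, the numerical threshold and the way trust is represented are illustrative assumptions of mine, not part of the account.

    def interpret_indirect_answer(trust_in_speaker: float, threshold: float = 0.5) -> str:
        """A hearer's choice point when given an indirect answer, such as
        'Valentino has mononucleosis' in reply to a yes/no question."""
        if trust_in_speaker >= threshold:
            # Presuming cooperativity, the hearer calculates the relevance
            # implicature: in effect, "No, I have not been seeing Valentino."
            return "calculate the implicature"
        # If the hearer suspects a conflicting goal, the presumption lapses and
        # the response should be treated as irrelevant to the question at hand.
        return "treat the response as irrelevant"

    print(interpret_indirect_answer(0.9))  # a trusting Justin is misled
    print(interpret_indirect_answer(0.2))  # a wary Justin withholds the implicature

Janet’s best strategy is therefore to keep Justin’s estimate above the threshold, which is just what appearing cooperative accomplishes.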

5.3. Dogwhistling
In her important discussion of dogwhistles, Saul [2018] classifies dogwhistles as
a key form of political manipulation. The definition above yields the same result,
at least concerning what Saul calls covert dogwhistles. To see this, consider the
following example:

(3) “Yet there’s power, wonder-working power, in the goodness and idealism
and faith of the American people.” –George W. Bush

Here, Bush uses a covert dogwhistle to signal to his Evangelical constituency that
he is intending to endorse fundamentalist Christian values. But his use of this
dogwhistle is clearly strategic, and clearly covert. There are many different com-
peting accounts of dogwhistles, but for our purposes we can treat dogwhistles
as utterances that admit of two or more interpretations, one of which is intended
to convey something distinctive to a select subset of an audience.32
32 Thus, I here remain neutral on whether the two available contents are semantically encoded
or merely conveyed. Stanley [2015] claims that dogwhistles involve not-at-issue content that
is semantically encoded. By contrast, Khoo [2017] treats code words as conveying different
contents through a pragmatic mechanism. Henderson and McCready [2018] do likewise.

In crafting an utterance that he knows can be interpreted in two distinct
ways, Bush deliberately violates the principle of Cooperativity—he speaks am-
biguously, or at least obscurely. But as we saw above, he likewise intends for
the majority of the audience to believe that he is not speaking in this way—that
he is being fully cooperative. This is why he would, if pressed, deny that his
statement was anything other than fully forthright. Plausible deniability is the
ability to plausibly insist that one has been fully cooperative. Thus, Bush speaks
manipulatively: he speaks strategically, while intending (at least the majority of)
his audience to believe that in speaking as he did, he was being fully cooperative
with their goals.

5.4. Manipulative Underspecification


A novel form of manipulative speech that emerges as a result of our discussion
is manipulative underspecification, or what I have elsewhere called “pied pip-
ing”. Manipulative underspecification is closely related to strategic ambiguity
and vagueness; in fact, all three phenomena share the same basic mechanism.
Consider the following example:

(4) “As your president, I will do everything in my power to protect our
LGBTQ citizens from the violence and oppression of a hateful foreign ide-
ology.” –Donald Trump

Here, we do not know exactly what the quantificational expression “everything
in my power” ranges over; this is a case of what Jeffrey King [2018] calls felicitous
underspecification. In cases of felicitous underspecification, context does not fix a
particular proposition that is expressed by the utterance. Rather, there may be a
range of propositions that are candidates for being exactly what Donald Trump
meant. Different propositions will draw different bounds on the things that are
in Trump’s power that he intends to do for the LGBTQ community. In standard
cases of felicitous underspecification, speakers and their interlocutors know, for
the purposes of the conversation, just how precise utterances must be to remain
felicitous, and underspecification will not affect understanding. Ordinary con-
versation does not require full specificity.
But when the interests of speaker and audience conflict, this kind of under-
specification can be used manipulatively. In this case, Donald Trump does not
employ the same standards for precision or specification that his audience does.
Rather, he tries to get his audience to believe that he is being fully cooperative,
and so believe that his goals for the conversation are aligned with their own. In
so doing, he hides the fact that he has deliberately underspecified any particular
propositional content, in the hope that, against the presumption of cooperativity,
the audience will take him to mean something specific. In doing so, he hopes to
avoid the costs associated with being detected as strategic, and so make it more
likely that his claims are accepted by the audience. This form of manipulative
speech is extremely common in political speech.
But there are many examples of manipulative underspecification outside of
politics. Horoscope writers intend their audiences to believe that they are making
precise, substantive predictions, when in fact they are using only vague language
that rules almost nothing out. The same is true for fortune-tellers, who aim to
get their clients to believe that they have made specific predictions—and so have
been fully cooperative—when in fact they have covertly been as vague as possi-
ble, couching their predictions in terminology that suggests specificity. The same
is often true for many forms of prediction: in sports, economics, the weather, and
even medicine.
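The underlying mechanism can be rendered schematically. In the following Python sketch, the candidate readings of “everything in my power” and the trust threshold are illustrative assumptions of mine; the point is only that the speaker commits to none of the candidates while a trusting audience settles on a specific one.

    # Candidate resolutions of the underspecified phrase; the particular
    # propositions listed here are illustrative only.
    candidate_readings = [
        "sign legislation protecting LGBTQ citizens",
        "issue executive orders protecting LGBTQ citizens",
        "mention the issue occasionally in speeches",
        "do nothing beyond what is already required by law",
    ]

    def audience_reading(trust_in_speaker: float) -> str:
        """A trusting audience resolves the underspecification charitably and
        settles on a strong, specific reading; a wary audience notes that
        nothing specific has been said at all."""
        if trust_in_speaker > 0.5:
            return candidate_readings[0]        # a strong, specific reading
        return "no specific commitment made"    # underspecification detected

    # The speaker is committed to none of the candidates, and so retains
    # deniability no matter which reading the audience settles on.
    print(audience_reading(0.9))
    print(audience_reading(0.2))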

5.5. Manipulative Speech as a Natural Kind


Finally, the formula above rightly predicts that a range of further forms of speech
are manipulative. First, consider what are sometimes called “backdoor speech
acts”. Backdoor speech acts are speech acts that add contents to the common
ground through means other than assertion.33 Standardly, such speech acts are
taken to add not-at-issue contents to the common ground while preventing an
audience from adequately evaluating or attending to those contents. In other
words, such speech acts add to the common ground covertly, in virtue of the
speaker pursuing a hidden goal. But in paradigmatic backdoor speech acts, this
hidden goal will also conflict with the audience’s goal for the conversation: such
speech acts can be undertaken with a wide range of nefarious aims, including
subordinating one’s conversational partner and legitimating preexisting unjust
norms. Thus, such speech acts are also strategic, and so my definition classifies
them as manipulative.
Further, consider misdirection, obfuscation, stupefying, and bullshitting. In
each case, a speaker violates one of the Gricean maxims. In misdirection, a
speaker violates the maxim of relation. In obfuscation, it is the maxim of quan-
tity. In stupefying, it is plausibly the maxim of manner.34 In bullshitting, it is
the maxim of quality.35 Likewise, in each case, if this violation is intended to
go undetected, the speech will be manipulative. And such violations often are
intended to go undetected—there are costs to being recognized as engaging in
any of these speech acts. Thus, each is an instance of manipulative speech.
33 See Stanley [2015], Langton [2018], and McGowan [2019] for discussion.
34 Stupefying is a speech act in which a speaker attempts to get an audience to accept an
assertion without understanding its content. Thus, stupefying is something like the front-door
counterpart of the speech acts discussed above. For an account of stupefying, see Deigan [2020].
35 For an account of bullshitting, see Frankfurt [2005] and Fallis and Stokke [2017].

This simple definition unifies a variety of speech acts that we know are
manipulative, and makes precise predictions about new forms of manipulative
speech. This suggests that the definition picks out a natural kind. In doing so,
the definition at least partially answers the question posed by Beaver and Stanley
[2019], who ask what criteria there might be that unify the various forms of ne-
farious political speech. On my view, many of the forms of speech they consider,
as well as a range of other speech acts that have been studied independently, are
unified in being covertly strategic. Thus, covertly strategic speech acts form an
important category of study for the philosophy of language.

6. Propaganda

Turning now to political speech, the theory developed above can be used to for-
mulate a novel, attractive account of the nature of propaganda and its mecha-
nisms. First, note that it is standard to define propaganda in terms of manipula-
tion. Consider, for example, the following definitions:

Propaganda is the management of collective attitudes by the manipu-
lation of significant symbols. [Lasswell, 1927]

Propaganda is the deliberate, systematic attempt to shape percep-
tions, manipulate cognitions, and direct behavior to achieve a re-
sponse that furthers the desired intent of the propagandist.
[Jowett and O’Donnell, 1986]

[Propaganda is m]ass suggestion or influence through the manipula-
tion of symbols and the psychology of the individual.
[Pratkanis and Aronson, 1992]

[Propaganda is] the organized attempt through communication to af-
fect belief or action or inculcate attitudes in a large audience in ways
that circumvent or suppress an individual’s adequately informed, ra-
tional, reflective judgment. [Marlin, 2013]

Propaganda is manipulation of the rational will to close off debate.
[Stanley, 2015]

While these definitions differ in important ways, each of them claims, in as many
words, that propaganda operates by means of manipulation, and plausibly, the
primary medium through which propaganda operates is speech. Thus,
manipulative speech is at least a core component of propaganda. But what makes
propaganda distinctive?
One key way in which propaganda differs from other forms of manipulative
speech is that propaganda does not seem to occur as part of a conversation. This
is due to the fact, also apparent in the definitions above, that propaganda is
issued from a single source, but aimed at a mass audience. This points us to
an initial proposal about the connection between propaganda and manipulative
speech:

Propaganda, v. 1: Propaganda is manipulative speech directed toward a mass
audience.

This proposal, while appealing, is too narrow. While speech is perhaps the pri-
mary medium through which propaganda operates, it is surely not the only
one. There can be propagandistic signs, music, art, film, and architecture—
propaganda can operate through a wide range of media. Moreover, it would
be useful to know what changes, if any, must be made to our account of manip-
ulative speech once such speech is directed at a mass audience.
In order to formulate a better proposal, we can here follow Stanley [2015] in
introducing the idea of public discourse. Public discourse can be thought of as a
generalization of conversation to the case where a speaker speaks to a mass au-
dience. Public discourse retains many, but not all of conversation’s key features.
Contributors to public discourse will likewise have goals, and such goals—along
with the ways they are pursued—can be hidden. Members of audiences, likewise,
will have goals, and those may align more or less fully with those of the speaker.
Each member of the audience will likewise have an assessment of how fully a
speaker’s goals align with their own: they will have some degree of linguistic
trust.
But there are key differences between ordinary conversation and public dis-
course. First, public discourse, unlike conversation, is often one-sided. Audi-
ences in public discourse often do not have the chance to respond to speakers.
Further, in public discourse, the goals and background beliefs of mass audiences
are not uniform, and members of the audience will not all have the same degree
of linguistic trust in the speaker. This variation will lead to different strategic
choices on the part of the manipulative contributors to public discourse, and dif-
ferent ways of cultivating linguistic trust. Speakers will have to operate with a
summary assessment of the goals of their audience, or perhaps with multiple
assessments of the goals of different parts of the audience. Finally, unlike in
conversation, contributions to public discourse need not be linguistic. One can
contribute to public discourse by making a movie, composing a piece of music,
hanging a banner, or in any number of other ways.
With the idea of a contribution to public discourse in hand, we can offer the
following definition of propaganda:

Propaganda, v. 2: Propaganda is a manipulative contribution to public discourse.


When combined with the theory of manipulative speech developed above, this
proposal significantly advances our understanding of propaganda. The account
developed above provides us with a precise characterization of what it is for a
contribution to conversation to be manipulative: a contribution to a conversa-
tion is manipulative if it is covertly strategic. But this definition can be straight-
forwardly extended to contributions to public discourse. Just as in the case of
manipulative speech in conversation, propagandists speak so as to pursue goals
that they know conflict with the goals of the public, all while trying to make their
speech appear cooperative with the public’s goals.
Of course, in light of the differences between conversation and public dis-
course, the speaker will make use of a more complicated assessment of the au-
dience in choosing her manipulative strategies. But the basic structure of ma-
nipulative speech and propaganda is the same. Propagandists will do whatever
they can to convince public audiences that they share the public’s goals, and so
that their contributions to public discourse are intended to further those goals.
The propagandist will then be successful to the degree that the audience inter-
prets those contributions in a way that brings about the propagandist’s intended
perlocutionary effect.
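The difference this makes can be illustrated with a minimal sketch, in Python, of the kind of summary assessment a propagandist might work with. The segments, goals, and numbers below are illustrative assumptions, not claims about any actual audience.

    from dataclasses import dataclass

    @dataclass
    class AudienceSegment:
        name: str
        goals: frozenset         # the goals this part of the public holds
        linguistic_trust: float  # this segment's assessment of the speaker's cooperativity

    # A mass audience, unlike a single interlocutor, comes in segments with
    # different goals and different default levels of linguistic trust.
    public = [
        AudienceSegment("core supporters", frozenset({"national renewal"}), 0.9),
        AudienceSegment("undecided voters", frozenset({"economic security"}), 0.5),
        AudienceSegment("opponents", frozenset({"accountability"}), 0.1),
    ]

    def expected_uptake(audience: list) -> float:
        """A crude proxy for how widely a covertly strategic contribution will
        be accepted: segments with more linguistic trust are more likely to
        read the contribution as cooperative with their own goals."""
        return sum(seg.linguistic_trust for seg in audience) / len(audience)

    print(round(expected_uptake(public), 2))  # the summary assessment the propagandist tracks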
Like manipulative speech in conversation, propaganda operates by exploit-
ing the public’s trust that a contribution is made against a background of aligned
goals. Accordingly, many of the same mechanisms of cultivating and exploiting
linguistic trust that were effective in the case of conversation will be effective
here. Propagandists can pose as non-strategic by speaking in ways that make
them appear ingenuous, by appearing to endorse the goals that they know the
audience holds, or by speaking in ways that foster linguistic trust. Thus, a key
element of propaganda is deceiving a public audience about one’s goals. Pro-
paganda is a covertly non-cooperative contribution to public discourse aimed at
bringing about a particular—often political—perlocutionary effect.
This view of propaganda subsumes a wide range of instances of propaganda
discussed in the literature, and improves on prior definitions. In a recent discus-
sion, Stanley [2015] distinguishes between several different forms of propaganda:
supporting propaganda, undermining propaganda, and demagoguery. Consider
first his definition of undermining propaganda:

Undermining Propaganda: A contribution to public discourse that is
presented as an embodiment of certain ideals, yet is of a kind that
tends to erode those very ideals. [Stanley, 2015, p. 54]

This proposal is closely related to the definition I have offered. However, my
definition is superior in two key ways. First, if propaganda is going to achieve
the effects at which it is aimed, it must appear to be directed toward goals or
ideals that the public holds; it cannot merely be directed toward some ideal or
other. If the audience does not share the goals or ideals that the contribution
to public discourse appears to advance, then the propaganda cannot hope to bring
about its perlocutionary aims.
Stanley appears to recognize this, and goes on to define a form of propaganda
that he calls “undermining demagoguery”:

Undermining Demagoguery: A contribution to public discourse that
is presented as an embodiment of a worthy political, economic, or
rational ideal, but is in the service of a goal that tends to undermine
that very ideal. [Stanley, 2015, p. 69]

Undermining demagoguery involves a propagandist making a contribution to
public discourse that poses as directed toward worthy goals or ideals, but which
undermines those very goals. But Stanley then goes on to admit that it is hard
to determine which goals are worthy and which are not. The definition I
have provided does not face this problem, because determining which goals are
worthy goals is not a component of the theory of manipulative speech. On my
view, propaganda is effective not because it appears to pursue ideals that are
objectively worthy, but rather because it appears to pursue ideals largely endorsed by the
public. These goals may be worthy, but if they are, this fact is incidental. A
contribution to public discourse can be propagandistic even when the ideals that
it appears to further are abhorrent.
This brings out another key way in which my proposal differs from Stanley’s—
a way in which it is preferable to his. On Stanley’s view, propaganda is effective
because it exploits flawed ideological beliefs. But on my view, no such beliefs
are required for propaganda to be effective. Instead, propaganda exploits the
audience’s belief that the speaker has goals that align with their own—it exploits
the audience’s linguistic trust—and it may also exploit their uncertainty about what
constitutes an optimal contribution to their goals. Propaganda is thus effective
to the extent that the speaker can convince the audience that his contributions
to public discourse are in fact consonant with and instrumental to their goals
and ideals, and get the audience to act on that conviction. Thus, propaganda
operates by appearing to further goals that the public already holds, all while
working toward some conflicting goal held but hidden by the propagandist.

7. Conclusion

The above theory provides us with a framework in which to study the varieties
of manipulative speech. But it also leaves much work still to be done. First,
there are important questions concerning how to model covertly strategic speech.
Frameworks that model contexts as bodies of common knowledge, or as shared
conversational scoreboards, are not adequate to the task, because in manipulative
speech, speaker and audience have different conceptions of how the conversa-
tion is going. To account for this, we need scoreboards that can model deception
concerning a speaker’s goals and other facts about the context relevant to lin-
guistic interpretation. Further, while I have here discussed a few speech acts
predicted to be manipulative by this definition, there are many forms of manip-
ulative speech still to be studied.
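One way to see what such a scoreboard would have to track is given by the following minimal sketch, in Python. The field names, and the use of simple inequality as a stand-in for goal conflict, are illustrative assumptions rather than a worked-out formal proposal.

    from dataclasses import dataclass, field

    @dataclass
    class Scoreboard:
        """A context model adequate to covertly strategic speech must track, at
        a minimum, the divergence between the goal actually guiding the speaker
        and the goal the audience ascribes to her."""
        speaker_actual_goal: str     # the ulterior goal guiding the speaker's choices
        audience_ascribed_goal: str  # the goal the audience takes the speaker to have
        audience_goal: str           # the audience's own goal for the conversation
        common_ground: set = field(default_factory=set)

        def covertly_strategic(self) -> bool:
            # Manipulation, on the definition above: the speaker's actual goal
            # conflicts with the audience's goal, while the audience takes the
            # speaker's goal to be aligned with its own.
            return (self.speaker_actual_goal != self.audience_goal
                    and self.audience_ascribed_goal == self.audience_goal)

    board = Scoreboard(
        speaker_actual_goal="win support without committing to anything",
        audience_ascribed_goal="inform the audience of the speaker's plans",
        audience_goal="inform the audience of the speaker's plans",
    )
    print(board.covertly_strategic())  # True; a single shared scoreboard would miss this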
Perhaps more importantly, the theory provides us with the beginnings of
a toolkit for detecting—and so resisting—manipulative speech in politics and
public life. Manipulative speakers exploit our presumptions of cooperativity
to deceive us about their goals, and in so doing make it more likely that we
accept their often nefarious claims. By investigating the nature of linguistic trust,
examining our default presumptions of cooperativity, and paying close attention
to the goals of public figures and the language they use to pursue them, we can
cast light on strategic speech intended to be covert. In so doing, we will see that
many figures in public life do not speak in an honest, straightforward manner.
Rather, like Marc Antony, they do not speak right on, and stir our blood only by
convincing us that they do.

References

Enriqueta Aragonès and Zvika Neeman. Strategic ambiguity in electoral compe-
tition. Journal of Theoretical Politics, 12(2):183–204, 2000.
Enriqueta Aragonès and Andrew Postlewaite. Ambiguity in election games. Re-
view of Economic Design, 7:233–255, 2002.
Nicholas Asher. The non cooperative basis of implicature. In D. Béchet and
A. Dikovsky, editors, Logical Aspects of Computational Linguistics, volume
7351 of Lecture Notes in Computer Science. LACL 2012, Springer Berlin Heidel-
berg, 2012.
Nicholas Asher and Alex Lascarides. Logics of Conversation. Cambridge Univer-
sity Press, 2003.
Nicholas Asher and Alex Lascarides. Strategic conversation. Semantics and Prag-
matics, 6(2):1–62, 2013.
Nicholas Asher and Soumya Paul. Strategic conversations under imperfect infor-
mation: Epistemic message exchange games. Journal of Logic, Language, and
Information, 27:343–385, 2018.
Nicholas Asher, Soumya Paul, and Antoine Venant. Message exchange games
in strategic contexts. Journal of Philosophical Logic, 46(4):355–404, 2017.
Kent Bach. The top 10 misconceptions about implicature. In Betty Birner and Gre-
gory Ward, editors, Drawing the Boundaries of Meaning: Neo-Gricean studies in
semantics and pragmatics in honor of Laurence R. Horn, Studies in the Language
Companion, pages 21–30. John Benjamins Publishing Company, 2005.
Kent Bach. Loaded words: On the semantics and pragmatics of slurs. In David
Sosa, editor, Bad Words: Philosophical Perspectives on Slurs, Engaging Philoso-
phy, chapter 3, pages 60–76. Oxford University Press, 2018.
Kent Bach and Robert M. Harnish. Linguistic Communication and Speech Acts. The
MIT Press, Cambridge, MA and London, England, 1979.
Anne Barnhill. What is manipulation? In Christian Coons and Michael Weber, ed-
itors, Manipulation: Theory and Practice, pages 51–72. Oxford University Press,
Oxford and New York, 2014.
David Beaver and Jason Stanley. Toward a non-ideal philosophy of language.
Graduate Faculty Philosophy Journal, 39(2), 2019.
Edward L. Bernays. Propaganda. Routledge, 1928.
Andreas Blume and Oliver Board. Intentional vagueness. Erkenntnis, 79:855–899,
2014.
Reinhard Blutner. Some aspects of optimality theory in natural language inter-
pretation. Journal of Semantics, 17:189–216, 2000.
Reinhard Blutner and Henk Zeevat. Introduction. In Reinhard Blutner and Henk
Zeevat, editors, Optimality Theory and Pragmatics, Palgrave Studies in Prag-
matics, Language, and Cognition. Palgrave Macmillan, 2004.
Reinhard Blutner, Helen de Hoop, and Petra Hendriks. Optimal Communication.
CSLI Publications, Stanford, CA, 2006.
Renée Jorgensen Bolinger. The pragmatics of slurs. Noûs, 51(3):439–462, Septem-
ber 2017.
Elisabeth Camp. Insinuation, common ground, and the conversational record. In
Daniel Fogal, Daniel W. Harris, and Matt Moss, editors, New Work on Speech Acts,
chapter 2. Oxford University Press, 2018.
Herman Cappelen and Josh Dever. Bad Language. Contemporary Introductions
to Philosophy of Language. Oxford University Press, Oxford and New York,
2019.
Noam Chomsky. Media Control: The Spectacular Achievements of Propaganda. Open
Media Pamphlet Series. Seven Stories Press, 1991.
Shlomo Cohen. Manipulation and deception. Australasian Journal of Philosophy,
96(3):483–497, October 2018.
Vincent P. Crawford and Joel Sobel. Strategic information transmission. Econo-
metrica, 50(6), 1982.
Mike Deigan. Stupefying. https://mikedeigan.com/pdfs/deigan-stupefying-
draft.pdf, 2020.
Eric S. Dickson and Kenneth Scheve. Social identity, political speech, and elec-
toral competition. Journal of Theoretical Politics, 18(1):5–39, January 2006.
Don Fallis. What is lying? Journal of Philosophy, 106(1):29–56, 2009.
Don Fallis and Andreas Stokke. Bullshitting, lying, and indifference toward truth.
Ergo, 4, 2017.
Joseph Farrell and Matthew Rabin. Cheap talk. Journal of Economic Perspectives,
10(3):103–118, Summer 1996.
Yossi Feinberg. Strategic communication. Proceedings of TARK XIII, pages 1–11,
July 2011.
Michael Franke. From Signal to Act: Game Theory in Pragmatics. PhD thesis, Insti-
tute for Logic, Language, and Computation, University of Amsterdam, Ams-
terdam, the Netherlands, 2009.
Michael Franke and Robert van Rooij. Strategies of persuasion, manipulation,
and propaganda: Psychological and social aspects. In Johan van Benthem, Sujata
Ghosh, and Rineke Verbrugge, editors, Models of Strategic Reasoning: Logics,
Games, and Communities, chapter 8. Springer, Heidelberg, 2015.
Harry G. Frankfurt. On Bullshit. Princeton University Press, Princeton, NJ, 2005.
Elizabeth Fricker. Stating and insinuating. The Aristotelian Society Supplementary
Volume, 86(1):61–94, 2012.
Katherine Frost-Arnold. The cognitive attitude of rational trust. Synthese, 191(9),
2014.
Katherine Frost-Arnold. Social media, trust, and the epistemology of prejudice.
Social Epistemology, 30(5-6):513–531, 2016.
Sanford Goldberg. Self-trust and extended trust: A reliabilist account. Res Philo-
sophica, 90(2):277–292, 2013.
Sanford C. Goldberg. Relying on Others: An Essay in Epistemology. Oxford Uni-
versity Press, Oxford, 2012.
Robert E. Goodin. Manipulatory Politics. Yale University Press, Yale and London,
1982.
Paul Grice. Studies in the Way of Words. Harvard University Press, 1989a.
Paul Grice. Logic and conversation. In Studies in the Way of Words Grice [1989a],
pages 22–40.
Jason Hanna. Libertarian paternalism, manipulation, and the shaping of prefer-
ences. Social Theory and Practice, 41(4):618–643, 2015.
Katherine Hawley. Trust, distrust and commitment. Noûs, 48(1):1–20, 2014.
Katherine Hawley. How to be Trustworthy. Oxford University Press, Oxford, 2019.
Robert Henderson and Elin McCready. How dogwhistles work. Proceedings of
LENLS, 2018.
Robert Henderson and Elin McCready. Dog whistles and the at-issue/non-at-
issue distinction. In Daniel Gutzman and Katharina Turgay, editors, Secondary
Content: The Semantics and Pragmatics of Side-Issues, volume 37 of Current Re-
search in the Semantics/Pragmatics Interface, pages 222–245. Brill, Leiden, The
Netherlands, 2019.
Edward S. Herman and Noam Chomsky. Manufacturing Consent. Pantheon
Books, New York, 1988.
Laurence Horn. Toward a new taxonomy for pragmatic inference: Q-based and
r-based implicature. In Deborah Schiffrin, editor, Meaning, form, and use in
context: linguistic applications, Georgetown University Round Table on Lan-
guages and Linguistics, pages 11–42. Georgetown University Press, Washing-
ton, DC, 1984.
Garth Jowett and Victoria O’Donnell. Propaganda and Persuasion. SAGE, London
and New Delhi, 1st edition, 1986.
Emir Kamenica and Matthew Gentzkow. Bayesian persuasion. American Eco-
nomic Review, 101(6):2590–2615, October 2011.
Vance Kasten. Manipulation and teaching. Philosophy of Education, 14(1):53–62,
1980.
Justin Khoo. Code words in political discourse. Philosophical Topics, 45(2):33–64,
2017.
Jeffrey C. King. Strong contextual felicity and felicitous underspecification. Phi-
losophy and Phenomenological Research, XCVII(3):631–657, November 2018.
Michael Kligman and Charles M. Culver. An analysis of interpersonal manipula-
tion. Journal of Medicine and Philosophy, 17(2):173–197, 1992.
Rae Langton. Blocking as counterspeech. In Daniel Fogal, Daniel W. Harris, and
Matt Moss, editors, New Work on Speech Acts, chapter 6, pages 144–164. Oxford
University Press, 2018.
Harold D. Lasswell. The theory of political propaganda. The American Political
Science Review, 21(3):627–631, 1927.
James J. Lee and Steven Pinker. Rationales for indirect speech: The theory of
the strategic speaker. Psychological Review, 117(3):785–807, 2010.
Sam Leith. You Talkin’ to Me? : Rhetoric from Aristotle to Trump and Beyond...
Profile Books Ltd, London, 2011.
Sam Leith. Anti-rhetoric can be the best rhetoric, 2014. URL https://www.ft.
com/content/fc64a1ac-8a86-11e3-9c29-00144feab7de.
Stephen C. Levinson. Pragmatics. Cambridge University Press, Cambridge, 1983.
Randal Marlin. Propaganda and the Ethics of Persuasion. Broadview Press, 2nd
edition, 2013.
Mary Kate McGowan. Just Words. Oxford University Press, Oxford and New
York, 2019.
Claudia Mills. Politics and manipulation. Social Theory and Practice, 21(1):97–112,
Spring 1995.
Robert Noggle. Manipulative actions: A conceptual and moral analysis. Ameri-
can Philosophical Quarterly, 33(1):43–55, 1996.
Geoffrey Nunberg. The social life of slurs. In Daniel Fogal, Daniel W. Harris, and
Matt Moss, editors, New Work on Speech Acts, chapter 10, pages 237–295. Oxford
University Press, New York, 2018.
Anthony Pratkanis and Elliot Aronson. Age of Propaganda: The Everyday Use and
Abuse of Persuasion. W. H. Freeman and Co, 1992.
Anne Quaranto and Jason Stanley. Propaganda. In Justin Khoo and Rachel
Sterken, editors, Routledge Handbook of Social and Political Philosophy of Lan-
guage. Routledge, forthcoming.
Joseph Raz. The Morality of Freedom. Oxford University Press, Oxford and New
York, 1988.
Craige Roberts. Information structure in discourse: Towards an integrated for-
mal theory of pragmatics. In Jae-Hak Yoon and Andreas Kathol, editors, OSU
Working Papers in Linguistics, Vol 49: Papers in Semantics. The Ohio State Uni-
versity Department of Linguistics, 1996.
Jennifer Saul. Dog whistles, political manipulation, and the philosophy of lan-
guage. In Daniel Fogal, Daniel W. Harris, and Matt Moss, editors, New Work
on Speech Acts. Oxford University Press, 2018.
Jennifer M. Saul. Lying, Misleading, and What is Said: An Exploration in Philosophy
of Language and Ethics. Oxford University Press, Oxford and New York, 2012.
T.M. Scanlon. What We Owe to Each Other. Harvard University Press, Cambridge,
MA, 1998.
Lawrence M. Solan and Peter M. Tiersma. Speaking of Crime: The Language of
Criminal Justice. Chicago Series in Law and Society. University of Chicago
Press, Chicago, 2005.
Robert Stalnaker. Context. Oxford University Press, Oxford and New York, 2014.
Robert C. Stalnaker. On the representation of context. Journal of Logic, Language,
and Information, 7:3–19, 1998.
Robert C. Stalnaker. Common ground. Linguistics and Philosophy, 25:701–721,
2002.
Jason Stanley. How Propaganda Works. Princeton University Press, Princeton, NJ,
2015.
Andreas Stokke. Lying, deceiving, and misleading. Philosophy Compass, 8(4):
348–359, 2013.
Cass Sunstein. The Ethics of Influence: Understanding and Dealing With Manip-
ulative People. Parkhurst Brothers Publishers Inc., Little Rock, AR, revised
edition, 2016.
Zoltán Gendler Szabó. The goal of conversation. Aristotelian Society Supplemen-
tary Volume, 94(1):57–86, 2020. Manuscript, Joint Meeting of the Mind and
Aristotelian Society.
Michael Tomz and Robert P. van Houweling. The electoral implications of can-
didate ambiguity. American Political Science Review, 103(1):83–98, February
2009.
Robert van Rooij. Questioning to resolve decision problems. Linguistics and Phi-
losophy, 29:727–763, 2003.
