Benjamin R. Sherman (2016) There's No (Testimonial) Justice
To cite this article: Benjamin R. Sherman (2016) There’s No (Testimonial) Justice: Why Pursuit
of a Virtue is Not the Solution to Epistemic Injustice, Social Epistemology, 30:3, 229-250, DOI:
10.1080/02691728.2015.1031852
Benjamin R. Sherman is a lecturer in Philosophy at Boston University. Correspondence to: Benjamin
R. Sherman, Philosophy, Boston University, 745 Commonwealth Ave., Boston, MA 02215, USA. Email:
benrs@bu.edu
this by revising the credibility upwards to compensate. … The guiding ideal is to neu-
tralize any negative impact of prejudice in one’s credibility judgments by compensat-
ing upwards to reach the degree of credibility that would have been given were it not
for the prejudice. (91–2)
In cases where we cannot simply “reflate” the credibility level in the judgment, Fricker
suggests that the best course of action might be to “render our judgment more vague
and more tentative,” suspend judgment altogether, or seek more evidence (92).
I more or less agree with Fricker that it is good to try to neutralize the effects
of identity prejudices on our credibility judgments. But I have two major qualms
about Fricker’s suggestion that ideals of virtue should guide our practice. First,
there might be no such virtue as testimonial justice, even if there are ways to avoid
committing the injustices most or all of the time. Second, I will argue that, even if
there is (or could be) such a virtue, it would be a bad idea to have this virtue ideal
guide our practice as hearers, given that the very biases we are trying to avoid are
likely to influence our conception of the virtue we strive for. Given our unreliabil-
ity at evaluating others’ credibility in the first place, it is not at all clear that we
can correctly recognize reliability and compare it to our own performance. While
it would be good to become someone who is habitually and characteristically dis-
posed to be just, aiming to achieve this virtue makes us, I think, less likely to actu-
ally achieve it; if there is such a virtue, our best chance to achieve it is to assume
we will never have it, and instead develop strategies for overcoming our ongoing
susceptibility to vice.
3. Fricker’s Commitments
The most famous recent line of attack against the assumptions implicit in virtue
theory comes from situationism. Situationists argue that virtues—that is, stable
dispositions of character as they are conceived in virtue ethics—can only be real-
ized if certain assumptions about human nature are true; but those assumptions
appear to be false (Doris 1998, 2002, 2005, 2010; Harman 1999, 2000; Doris and
Stich 2007; Alfano 2013). The situationist challenge to virtue ethics has recently been extended to virtue epistemology, with critics arguing that virtue epistemologists likewise cannot claim the kinds of virtues they discuss are realizable without making dubious empirical assumptions (Alfano 2012, 2013, 2014; Olin and Doris 2014). Specifically, virtue theorists have traditionally assumed people have stable character traits
that are consistent across most situations, or at least that virtuous people will dis-
play virtue in all but the most extreme situations. Situationists argue, however,
that situational factors, including minor features of situations that are irrelevant to
moral or epistemic reasoning, have a much greater impact on human thought and
action than traditional virtue theory can accommodate. While situationism is hotly
contested, and many virtue theorists seek to meet the situationist challenge, Doris
and Stich argue that, given the empirical evidence, “the burden of argument has
importantly shifted: The advocate of virtue ethics can no longer simply assume
that virtue is psychologically possible” (2007, 121).
As might be expected, though, not all virtue theories are equally vulnerable to
the situationist critique. In brief, situationists argue that virtue theorists are kid-
ding themselves if they think someone disposed to be courageous—or open-
minded, or epistemically just—in one situation will be similarly disposed in all
situations. This argument attacks a deep-rooted tradition in virtue theory, but
there are branches that are willing to do away with the tradition, and accept that
we may have only dispositions-to-behave-well-in-certain-kinds-of-situations. While
I am not at all sure Fricker would want to adopt this line, I see no reason she
couldn’t. So situationism is not necessarily a threat to Fricker.
But there are other aspects of testimonial justice that raise questions about its
psychological possibility:
(I) Testimonial justice is presented as the solution to the problem of epistemic
injustice, and it is a virtue Fricker claims we “can, and should, aim for in
practice” (98–99).
(II) Testimonial injustice is a form of misjudgment, an intellectual mistake, not
recognized by the agent at the time it occurs (cf. 43). So, unlike most of
the familiar moral vices, someone cannot commit testimonial injustice
knowingly. As a result, testimonial justice is constituted in part by the
capacity to notice when we have made errors in credibility judgments.
(III) Rather than taking the virtue to be whatever set of faculties or strategies
produce the right sort of judgments and corrections, Fricker proposes that
epistemic justice is guided by the following ideal: “to neutralize any nega-
tive impact of prejudice in one’s credibility judgments by compensating
upwards to reach the degree of credibility that would have been given were
it not for the prejudice” (91–92). Thus, she stipulates a particular way of
combating testimonial injustice, involving at least two steps: Identify a par-
ticular unjust credibility judgment and reflate it by the right amount.
Some argue that it does not matter whether a given virtue is psychologically
possible, since virtues can serve as purely theoretical descriptions of unattainable
ideals. But (I) forecloses this line of argument in the case of testimonial justice,
since testimonial justice is supposed to be aimed for in practice. Omniscience,
infallibility and perfect rationality can be useful and interesting ideals to discuss in
theory, but, so far as I know, no one proposes that we actually try to achieve
them. Perhaps one might suggest that, even if impossible ideals cannot be attained,
it is still worth aiming for them. But this raises a further question about
psychological possibility: If we recognize something as an unattainable ideal, is it
psychologically possible to coherently aim for it?
Perhaps this puts too much weight on the term “aim”. Perhaps the idea is that
we should approximate the ideal as much as possible. Even then, it cannot be
taken for granted that the best way to deal with a problem like testimonial injus-
tice is to strive for a contrary virtue. Sometimes the perfect is the enemy of the
good—even if we are only aiming to approximate the perfect. Imagine that every-
one starts out committing testimonial injustices in 60% of cases where they evalu-
ate the credibility of a member of a marginalized group; and suppose that
everyone has two courses of action open to them: if someone tries to cultivate vir-
tue, she has a 1% chance of reducing her rate of injustice to 1%, and a 99%
chance of failing to reduce her rate of injustice at all. If she aims for a modest
improvement, she has a 99% chance of reducing her rate of injustice to 40%, and
a 1% chance of failing to reduce her rate of injustice at all. In this (admittedly
artificial) example, aiming for the nearest possible approximation of virtue seems
to be, on average, a less effective way of combating testimonial injustice than giv-
ing up on the ideal and aiming for modest, but reliably achievable improvement.
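The expected-rate arithmetic behind this artificial example can be made explicit with a quick check (the probabilities and rates are the ones stipulated above; the variable names are mine):

```python
# Expected rate of testimonial injustice under each strategy, using the
# stipulated numbers from the artificial example above (baseline rate: 60%).
baseline = 0.60

# Striving for the virtue: 1% chance of dropping to a 1% injustice rate,
# 99% chance of no improvement at all.
strive_for_virtue = 0.01 * 0.01 + 0.99 * baseline  # 0.5941

# Aiming for modest improvement: 99% chance of dropping to a 40% rate,
# 1% chance of no improvement.
modest_improvement = 0.99 * 0.40 + 0.01 * baseline  # 0.4020

print(f"striving for virtue: expected injustice rate = {strive_for_virtue:.2%}")
print(f"modest improvement:  expected injustice rate = {modest_improvement:.2%}")
```

On these stipulated numbers, the virtue-striver's expected injustice rate is roughly 59%, while the modest improver's is roughly 40%; this is the sense in which aiming for the ideal is, on average, the less effective strategy.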
Is this scenario too far-fetched to be a real worry? It might be far-fetched for
virtues like honesty and fidelity, since usually the major obstacles to becoming an
honest or faithful person are temptation or bad habits; striving for a perfect record
won’t obviously lead to worse results, in most cases, than striving for a modestly improved track record. But aspect (II) makes matters more difficult for testimonial justice. The problems raised by (II) will be discussed at greater length in Sections 4 and 5. But for now, let me suggest a way that (II) makes it more likely that the
perfect will be the enemy of the good. We could adopt a policy of trying to notice
when prejudice affects our credibility judgments, and compensate for it; but, unless
we are very arrogant, we will probably recognize that this policy will fail some por-
tion of the time, either because we forget sometimes to ask ourselves whether we
are influenced by prejudice, or because we sometimes fail to notice the influence
of prejudice despite asking the question. Still, we might suppose (and Fricker, it
seems, must suppose) that this would be better than nothing. But (II) supposes
that people can develop a disposition to reliably notice the influence of prejudice
on their own judgments. (This need not involve supposing that anyone can be per-
fectly reliable—but Fricker’s account seems to require a high degree of reliability.)
What would be the difference between the policy of trying to detect prejudice, and
striving for the disposition to reliably detect prejudice? The former does not have
an obvious stopping point, whereas the latter does. Moreover, the stopping point
for the latter is reaching the goal of becoming someone who reliably notices their
own errors in judgment. But, from a first-person point of view, it might seem to
me that I have achieved this goal, once I have stopped noticing errors in my own
judgment. Those who are far from the goal might have as much evidence of having achieved it as those who approximate it more closely, or even more.
Fricker might avoid these worries about misleading evidence if it were well established that some people were highly reliable at correcting their own prejudicial judgments; other capacities that are sometimes regarded as epistemic virtues
(like good facial recognition, or picking up the grammar of a language) involve
reliable achievement of good judgments without conscious awareness of the evi-
dence on which they are based. In that case, while introspection might not be a
good way to determine whether someone had achieved testimonial justice, a third
party could examine whose judgment is reliable, and examine how that reliable
faculty operates and is acquired.[4] This would be bad news for individuals trying to
aim for testimonial justice without the benefit of scientific findings about reliable
capacities. But at least people, as a community, could work to understand and
replicate this faculty of mistake-rectification.
But I know of no evidence establishing that anyone does possess such a capac-
ity. While Fricker suggests that there is more hope of attaining corrective testimo-
nial justice than naïve testimonial justice, the reverse may be true; since most people naively lack some prejudices, it is possible to examine what makes their
judgment more reliable, and how they differ from people whose credibility
judgments are influenced by prejudice. There may be general reasons to expect
naïve testimonial justice to be more common than corrective testimonial justice;
the first requires simply a reliable faculty, whereas the second requires a faculty to
reliably neutralize the errors of a different, unreliable faculty. A corrective mecha-
nism of this sort seems less likely to develop, and likely to be less developed, than
the faculty it corrects for.
For some kinds of errors, critical reflection plays, or seems to play, exactly the
sort of corrective role in question. Pre-reflectively, we are prone to fall prey to the
gambler’s fallacy and the Müller-Lyer illusion, but when we reflect critically, we can recognize our mistakes, and correct for them. Fricker seems to take reflection
to be well-suited to play the same role in correcting for identity prejudice in credi-
bility judgments (Fricker 2007, 91). Moreover, aspect (III) suggests a particular
way of correcting for prejudices, one which involves reflective critical thinking
about the initial credibility judgment. But reflective thinking about our reactions
cannot be assumed to be more reliable than pre-reflective thinking—often reflec-
tive thought is less reliable (cf. Wilson 2002; Kornblith 2010 and 2013; Alfano
2013, Chap. 6, and 2014). I will return to this point in Section 5.
humility. If you are aiming for testimonial justice, you are aiming for a state in
which you do not frequently notice yourself committing epistemic injustices with-
out rectifying them. But, of course, there are two ways to achieve this aim: You
could achieve testimonial justice, or you could stop noticing your uncorrected
testimonial injustices. The trouble is, since your own present judgments will always
seem correct to you, you will be unable to distinguish those two outcomes. Striving for the virtue of testimonial justice is not inherently at odds with updating
your beliefs in the face of new evidence, but it involves aiming to achieve a state
in which new evidence will, by hypothesis, never or rarely require such changes to
your credibility judgments.
Still, this might sound like an objection to virtue theory in general, not
Fricker’s virtue in particular. A mistaken idea about what is virtuous could always
lead someone astray. The GC (Grand Cyclops) thinks it is courageous to threaten and terrorize
defenceless people, and so valorises behavior we think is appalling, and perhaps
downright cowardly. The GC thinks the epistemically responsible thing to do is
avoid listening to those who want to change his views on race, gender, etc., and so
valorises an epistemic policy we think is viciously dogmatic and epistemically
irresponsible.
In fact, this sort of mistake is not a special problem for virtue theory at all, but
a problem for moral thinking in general. Any time we mistakenly think something
bad is good, our motivation to be moral could, in fact, lead us to be immoral. Vir-
tue theory differs from (some) other moral theories only in giving less specific
guidance about how to tell right from wrong. Becoming a utilitarian or Kantian
only helps us avoid these results if the theory we pick is correct (or at least never demands we do anything that is, in fact, wrong).
If striving for testimonial justice were no worse than striving for any other vir-
tue (or moral standard), the case of the GC would not be much of a problem. But
the case is at least somewhat worse for testimonial justice.
First, while it is always possible to have a false conception of a virtue (or some
other moral standard), one’s conception of testimonial justice is virtually guaran-
teed to match one’s own present state, and so reinforce complacency. Compare
with two closely related virtues: the moral virtue of justice, and the epistemic vir-
tue of intellectual responsibility. Neither of these virtues is constituted by reliably
recognizing errors. It is even possible that one might be morally just or epistemically responsible while believing one is not (at least on some theories of virtue). It
is quite likely that most people take themselves to generally be fairly just and intel-
lectually responsible, but there is nothing incoherent about thinking oneself unjust
or irresponsible. Sometimes we treat others unjustly for selfish reasons, or do so
accidentally, but then fail to make reparations out of embarrassment or laziness.
(Of course, those with more demanding notions of justice might think that it
would take a heroic effort to be truly just, and might, in all awareness, fall well
short of that heroic standard.) And, even if most people think themselves generally
decently just, probably most people can think of others who are somewhat more
just—who are a little more careful and observant of the demands of fairness and
desert. If so, they can generally observe at least some difference between the ideal
of justice and their own behavior. Intellectual responsibility is a little different,
since, of course, people cannot think their beliefs are too far from being true with-
out falling afoul of Moore’s Paradox: It makes no sense to say “I believe that p,
but it is likely that p is false.” But people can recognize themselves as careless, and
so take the attitude that it is a matter of good luck that their beliefs are mostly
true. They can, again, recognize that others are more careful and scrupulous than
themselves. In both cases, they can see how, with some effort, they could more
closely approximate the ideal.
Testimonial justice, on the other hand, is a matter of avoiding certain kinds of
error. Apart from rare crisis moments that shake our entire worldview, we cannot
regard most of our beliefs as erroneous, and, if we can recognize anyone as having
more accurate judgments than our own, at a given time, this judgment must be
based on our appraisal of their conclusions—which means either we share many
of their beliefs, or we find their judgments persuasive upon learning about them,
and adopt those beliefs ourselves. I cannot coherently think someone has much
better judgment than me if I disagree with her about too many of her conclusions.
This brings me to a second difference between testimonial justice and other vir-
tues. While the literature on virtue ethics sometimes seems to assume the author
and reader both already know what is virtuous and correct, it does suggest at least
some resources for correcting mistaken conceptions of virtue. For one thing, we
can check whether our conceptions of virtue cohere with the opinions of our role
models—those we think of as wise, admirable, and flourishing. For another, a
virtue must be at least compatible with living a life that is both admirable and
worthwhile.
These ways of screening for mistakes are not foolproof. If the Grand Cyclops
takes the Grand Wizard as his role model for courage, justice, and intellectual
responsibility, there is only slim hope that his views will change as he strives for
his ill-conceived ideals. It does seem plausible (though not obvious) that if his ide-
als were different, he would flourish more than he will if he pursues his current
ideals. But that would mean living a different enough life that he probably has no
way to make the comparison.
But, again, pursuing the virtue of testimonial justice nearly guarantees that
these corrective mechanisms will fail, whereas they stand a chance of working for
other virtues. I cannot coherently take someone as a role model for epistemic justice if I disagree with many of her judgments.[5] Here, again, I think we see a potential tension between epistemic responsibility and testimonial justice; if someone is
more epistemically responsible than me, or has a long track record of winning me
over when we disagree, then, insofar as I aspire to epistemic responsibility, I aspire
to being the sort of person who suspends judgment or revises my views in a situation like this.[6] When I aspire to testimonial justice, on the other hand, I aspire to being the sort of person who reliably judges credibility correctly, and so would not need to make such revisions to my credibility judgments.[7]
As for compatibility with the flourishing life, while I could realize my present
cowardice, injustice, or intellectual irresponsibility is a source of unhappiness in
my life, I could only recognize testimonial injustice as a source of unhappiness in
retrospect, by having already realized some of my past judgments were mistaken.
Perhaps I have missed opportunities, wasted effort, damaged potentially warm rela-
tionships, or shamed myself by underestimating others’ credibility in the past, and
this could lead me to revise some of my judgments. But, to the extent this involves
receiving new evidence about others’ credibility, it is not clear this changes my
conception of testimonial justice; rather, my own judgment is still the standard I
use to decide what is just, I have simply changed my mind about particular
credibility estimates. To the extent judgments that have caused unhappiness still
seem accurate and appropriate to me, it is unclear how I can think myself more
epistemically just for changing them.
Finally, there is one other reason my objection to testimonial justice is not
necessarily an objection to virtue theory generally: I see nothing inappropriate
about describing testimonial injustice as a vice, and striving to avoid this vice.
Doing so will not guarantee success, of course. Testimonial injustice is always a
mistake, and we don’t make mistakes on purpose, so we can easily fail to notice
when we are being vicious. But there is an important difference between aspiring
to an ideal of correct judgment and striving to avoid mistakes. Whereas my model
for correct judgment more or less has to be closely connected with my own con-
sidered views, the very process of trying to find and guard against mistakes
involves seeking gaps between my own opinions and correct judgment.
The upshot is that, while thinking about testimonial justice is likely to valorise
those opinions about which the Grand Cyclops feels sure, thinking about testimo-
nial injustice stands some chance (even if only a small chance) of undermining
those views. How does one approach this task of guarding against errors? If the
GC recognizes that he is sometimes epistemically unjust (perhaps towards friends
and fellow Klansmen), he can carefully reflect about what causes these mistakes. If,
for example, he realizes that he sometimes oversimplifies or exaggerates his friends’
opinions, he might work to become more careful about oversimplifying and exag-
gerating others’ views. In that case, there is at least a small chance this newfound
habitual caution will enable him to recognize a good argument from, say, an
ACLU lawyer or anti-racist pastor. Perhaps, on the other hand, the GC does not
see any evidence that he is epistemically unjust on any more than very rare occa-
sions. In that case, he is likely to be less motivated to think about what causes
people to mistakenly judge others unjustly. But it is not out of the question; he
might find the bare possibility of his own errors somewhat interesting, or he might
become more attuned to the causes of others’ errors. Instead of merely resenting
unfair generalizations about white Christians, he might spare a moment to wonder
what sort of mistake causes people to form these unfair generalizations. And if he
correctly identifies some of the mental mechanisms that cause unfair stereotypes,
he is then equipped with a conceptual tool that could, if circumstances are favour-
able, help him recognize his own failings. While thinking about intellectual success
invites him to assume he can recognize success, thinking about mistakes demands
that he think about what he doesn’t notice.
someone’s credibility will not be a simple matter, with a clearly identifiable standard of correctness.[9] If something like the prominent versions of the dual-processing model of cognition is correct, conscious, reflective thought is likely to be unable to process judgments as complex as those involved in evaluating someone’s credibility; attempts to do so consciously will be either wildly unreliable, or will end
up needing to appeal to the faster-processing unconscious gut reaction that is, by
hypothesis, biased (cf. Wilson 2002, 164–175, 188–194; Kahneman 2012, Chap. 21).
Boudry and Braeckman (2012) canvass literature suggesting that someone who is
fairly invested in their beliefs (which could presumably include racial prejudices) is
likely to find reasons to retain their beliefs, even if they are consciously striving to
be correct and rational, on account of confirmation bias and cognitive dissonance.
Worse still, Sperber and Mercier suggest that reflective reasoning is generally and
predictably subject to confirmation bias, and that reflective reasoning is only likely
to lead to correct conclusions in the context of persuasive discussion with others
who disagree (Sperber et al. 2010; Sperber and Mercier 2012). If this is true, we
should expect people to be systematically bad at revising their initial, biased credibility judgments through reflective reconsideration, unless they consistently had the opportunity to discuss those judgments with others who have judged accurately—an implausible hope, given that Fricker seems to suppose testimonial justice would require quick and reliable corrections.
Not only are there reasons to worry that reflection is unlikely to show us the
correct credibility judgments; there is also evidence that striving for objectivity,
and against prejudicial bias, is likely to be a self-defeating strategy in many con-
texts. Several studies suggest we are prone to becoming more biased when we con-
sider whether or not we are biased. (See, for instance, Pronin, Lin, and Ross 2002;
Uhlmann and Cohen 2007). Moreover, forming specifically anti-prejudicial inten-
tions can have ironic effects through the phenomenon of moral credentialing
(Monin and Miller 2001; Effron, Cameron, and Monin 2009) or rebound effects
(Macrae et al. 1994; Monteith, Sherman, and Devine 1998; Follenfant and Ric
2010). While it might be fruitful to be alert for prejudices and biases, thinking
explicitly about whether we succeed in being unprejudiced may be an ineffectual
strategy for becoming unbiased.
But even if we dismiss empirical worries about implicit bias, there is the more
epistemological problem of under-correction. Suppose we notice instances where
we should “reflate” a credibility judgment. How much should we reflate? Our esti-
mates are, of course, still coming from us—those who were prejudiced in the first
place. And then there is the likelihood that our initial credibility judgment will
have an anchoring effect on our reflation. Some correction is better than none (or
at least so I will suppose for the sake of argument), but the initial problem of
using our own judgment as a standard reappears when we reach the point of
trying to correct for recognized prejudices.
This is not to say someone could not succeed in becoming consistently
unprejudiced, or at correcting prejudicial reactions.[10] But it is not clear this sort of
success is best described as a virtue. I see no evidence (and Fricker provides none)
that there is some such skill or disposition as avoiding prejudices in general.[11] It
makes sense to describe an unreasonable prejudice as an ethical and epistemic vice,
and it makes sense to talk about prejudices as a category because they cause func-
tionally similar ethical and epistemic wrongdoing. But the ways that prejudices
infiltrate our thinking, and the ways we can root them out might not be similar
enough for there to be any unified disposition to avoid them. Having avoided or
uprooted 1000 prejudices will not necessarily prevent you from falling prey to
another prejudice, which might develop in a very different way from the other
1000. Testimonial injustice might be a vice (or family of vices) with no correlative
virtue.
Consider an analogy: the intellectual vice of being-stumped-by-puzzles. Some-
one could be disposed to become frazzled and discouraged whenever presented
with a puzzle. But is there a contrary virtue of being not-stumped-by-puzzles?
Since there are indefinitely many kinds of puzzles, I would not assume there is
any given set of dispositions that will help someone solve them in general, so I
would not imagine there is any such virtue as not-being-stumped-by-puzzles.
Other virtues (like patience) might be helpful with all puzzles, but I would not
expect there to be some disposition specific to the class of puzzles as such. An ability to
respond correctly to an indefinitely varied set of challenges sounds more like a
magical super-power than a psychologically possible disposition for a human
being.
So, it seems our best chance of succeeding in avoiding or correcting prejudices
is to remain vigilant about the kinds of prejudices we might be subject to, how to
identify them, and how to correct for them. We might not be able to succeed
without information from the social sciences, and we can never know the social
sciences won’t turn up new prejudices or biases we had failed to notice. Now per-
haps someone who did remain vigilant, and made good use of social scientific
information, could develop a habitual disposition to successfully respond justly to
others. She could have learned about all the prejudices to which she is vulnerable,
or developed a strategy that does, in fact, deal with all prejudices. But she can
never, from her own perspective, know that there are no other prejudices to worry
about. And if someone falsely believed herself to have the virtue of testimonial jus-
tice, believing she has a stable disposition is likely to lead to complacency, not the
sort of vigilance that might possibly produce the virtue. So even if there is such a
virtue as testimonial justice, it is not clear people can ever be in a position to
reasonably believe they have it, or to aim to be in such a position.
Finally, when we consider whether it is possible to avoid or correct for all the
identity prejudices in our society, it is sobering to reflect that it is even less likely
we can ever develop reliable dispositions to be epistemically just in the broader
sense, of giving to each the intellectual credit she deserves. Since we are all sus-
ceptible to biases (such as confirmation bias and false polarization bias), and we
navigate the world by making predictive generalizations which sometimes prove
false, we are all prone to giving some people too little credit sometimes. There is
no such thing as someone who is naively just to everyone all the time. We will all
be epistemically unjust to our rivals and critics. We will all sometimes unfairly
mistrust honest salespeople, and underestimate some unusually smart children.
While we might be able to eliminate many unreasonable stereotypes about race
and gender, we will probably always form generalizations about people on the basis
of their culture, economic class, religious views, political orientation, etc. While
these generalizations might be necessary for getting by in life, we should not
assume it is possible for our generalizations to be sufficiently refined to be fully
fair and accurate, as it seems they would need to be if we were to achieve
testimonial justice.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes
[1] One reviewer raises concerns about whether this line of argument is analogous to the
argument that utilitarianism is an incorrect moral theory because it is not a good decision
procedure. If my argument is correct, the situation is worse for Fricker in a couple of
ways. First, a utilitarian could argue (along the lines of Sidgwick 1907, IV.V.3, par. 8) that
utilitarianism is theoretically correct, but should not serve as a guide to action for most
or all people. Fricker, however, commits herself to the view that we should aim for
testimonial justice, and so offers her theory as one suitable to guiding thoughts and
actions. Second, some (notably Railton 1984) argue that utilitarianism (or other forms of
consequentialism) can serve as a standard for right action in practice, not acting as a deci-
sion procedure, but rather serving as a basis for sometimes critically reevaluating our
practices and decisions. My argument in Section 4 aims to show that Fricker’s theory of
testimonial justice is likely to undermine its own aims, even in this more critical and
reflective role.
[2] In very brief form, I will mention here two objections to Fricker’s line of argument. First,
her argument in favor of virtue theory turns centrally on the claim that the standards of
morality and epistemic responsibility are impossible to codify. This claim, though, is at
least somewhat controversial. Second, even if Fricker is right that there is no way to codify
morality and epistemic responsibility, one could adopt a particularist theory of morality
that was not committed to the existence of virtues. Further, various non-particularist
moral theories (including consequentialism and Ross’s (1930) deontological theory of
prima facie duties) and epistemological theories (such as veritism and evidentialism)
could plausibly accept the claim that proper judgment cannot be codified; such theories
stipulate what features of a situation are relevant to ethical or epistemological reasoning,
but do not claim we can know whether we have successfully taken all those relevant
features into account.
[3] Epistemic injustice can, on some occasions, lead to good results, just as lying, breaking
promises, or killing the innocent can sometimes lead to good results. But it is worth not-
ing that the kind of epistemic injustice Fricker discusses is always an error in judgment,
so an agent cannot intentionally engage in epistemic injustice in order to bring about
good consequences.
[4] This is, in a very broad sense, the program of reliabilist virtue epistemology, which is
arguably the branch of virtue theory that fares best against situationist critiques. See
Pritchard (2005); Alfano (2013, 112–114, 140, 149–150, 158); and Fairweather and
Montemayor (2014a, 2014b).
[5] I can, of course, regard someone’s methods or standards of judgment as exemplary, yet
insist that her conclusions are wrong, taking her to be epistemically unlucky. But (a) it
is not clear this is a coherent position, and (b) as Fricker conceives of epistemic justice in
terms of achieving the right results, I cannot regard the exemplary-yet-unlucky person as
just. At best perhaps I can say she is blameless for failing to achieve full justice.
[6] My thanks to an anonymous reviewer for pushing me on this point.
[7] There is also a puzzle here about the place of suspended and tentative judgments in
testimonial justice: Suppose, at t1, someone whom I take as a role model for epistemic
responsibility disagrees with me about credibility judgments. Suppose that, at t1, she is
right (and just) about the disputed credibility judgments. At t1, I cannot regard her as a
role model for testimonial justice because we disagree, but, if I prioritize epistemic
responsibility over testimonial justice, I might suspend judgment about my disputed judg-
ments, at t2. At t2, then, it is an open question whether my judgments are still unjust;
they are at least not inaccurate anymore, though they are still not as accurate as they
could be. Could I then take her as a role model of testimonial justice? Or would my sus-
pended judgment still constitute a disagreement with her (presumably un-suspended)
judgments? Could a form of skepticism about credibility judgments constitute a form of
epistemic justice, because it reliably avoids unjust credibility judgments?
[8] Might the Grand Cyclops proudly recognize that he is unresponsive to the evidence refut-
ing his beliefs? I can imagine such a case, especially if he thinks that people are apt to
trick him by showing him misleading evidence, or even that being willing to question his
own beliefs would be in some sense unfaithful or disloyal. In the former case, he presum-
ably thinks that his current unwillingness to consider new evidence is based on good past
evidence that his beliefs are true, and that there is a real danger of being tricked. The lat-
ter case is more interesting, as it would involve a kind of moral commitment to being
epistemically irresponsible, in at least some situations. But for present purposes I will
suppose the GC thinks he has been, and remains, properly responsive to the available
evidence.
[9] Fricker, in fact, emphasizes this point (91). I take it she thinks the indefinite complexity
of the task speaks in favor of virtue theory, as she seems to think any non-virtue theory
would be faced with the impossible task of codifying the process of correcting a credibility
judgment. But, first, I think she is wrong about other kinds of theory (see n. 2); second,
the very complexity she brings up might make it psychologically impossible for reflection
to play the role she intends.
[10] Sassenberg and Moskowitz (2005, 511) suggest that people might be able to cancel out
their prejudicial associations by striving to think creatively, instead of striving to be
unprejudiced. Lai et al. (2014) present findings which suggest that aiming at rectifying
injustice in our thinking is less effective than aiming for counter-stereotypical injustices:
in the experiment, thinking of white people as evil or adversarial, and black people as
friendly or helpful. Such strategies might succeed in eliminating someone’s prejudices, but
not, it seems, through taking testimonial justice as a goal.
[11] Sassenberg and Moskowitz (2005) (see previous note) hint at a possible exception. But
their research only shows that people can be primed to stereotype less; their work does
not show that people can do this for themselves.
References
Alfano, Mark. 2012. “Expanding the Situationist Challenge to Responsibilist Virtue Epistemol-
ogy.” The Philosophical Quarterly 62: 223–49.
Alfano, Mark. 2013. Character as Moral Fiction. New York: Cambridge University Press.
Alfano, Mark. 2014. “Expanding the Situationist Challenge to Reliabilism About Inference.” In
Virtue Epistemology Naturalized: Bridges Between Virtue Epistemology and Philosophy of
Science, edited by Abrol Fairweather, 103–22. New York: Springer.
Boudry, Maarten, and Johan Braeckman. 2012. “How Convenient! The Epistemic Rationale of
Self-validating Belief Systems.” Philosophical Psychology 25: 341–64.
Doris, John M. 1998. “Persons, Situations, and Virtue Ethics.” Noûs 32: 504–30.
Doris, John M. 2002. Lack of Character: Personality and Moral Behavior. New York: Cambridge
University Press.
Doris, John M. 2005. “Replies: Evidence and Sensibility.” Philosophy and Phenomenological
Research 71: 656–77.
Doris, John M. 2010. “Heated Agreement: Lack of Character as Being for the Good.” Philosophi-
cal Studies 148: 135–46.
Doris, John M., and Stephen P. Stich. 2007. “As a Matter of Fact: Empirical Perspectives on
Ethics.” In The Oxford Handbook of Contemporary Philosophy, edited by Frank Jackson
and Michael Smith, 114–52. Oxford: Oxford University Press.
Effron, Daniel A., Jessica S. Cameron, and Benoît Monin. 2009. “Endorsing Obama Licenses
Favoring Whites.” Journal of Experimental Social Psychology 45: 590–3.
Fairweather, Abrol, and Carlos Montemayor. 2014a. “Inferential Abilities and Common Epis-
temic Goods.” In Virtue Epistemology Naturalized: Bridges Between Virtue Epistemology and
Philosophy of Science, edited by Abrol Fairweather, 123–39. New York: Springer.
Fairweather, Abrol, and Carlos Montemayor. 2014b. “Epistemic Dexterity: A Ramseyian Account
of Agent-based Knowledge.” In Naturalizing Epistemic Virtue, edited by Abrol Fairweather
and Owen Flanagan, 118–42. New York: Cambridge University Press.
Follenfant, Alice, and François Ric. 2010. “Behavioral Rebound Following Stereotype Suppres-
sion.” European Journal of Social Psychology 40: 774–82.
Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford
University Press.
Harman, Gilbert. 1999. “Moral Philosophy Meets Social Psychology: Virtue Ethics and the
Fundamental Attribution Error.” Proceedings of the Aristotelian Society 99: 315–31.
Harman, Gilbert. 2000. “The Nonexistence of Character Traits.” Proceedings of the Aristotelian
Society 100: 223–6.
Kahneman, Daniel. 2012. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kornblith, Hilary. 2010. “What Reflective Endorsement Cannot Do.” Philosophy and Phenomeno-
logical Research 80: 1–19.
Kornblith, Hilary. 2013. On Reflection. Oxford: Oxford University Press.
Lai, C. K., M. Marini, S. A. Lehr, C. Cerruti, J. L. Shin, J. A. Joy-Gaba, A. K. Ho et al. 2014.
“Reducing Implicit Racial Preferences: I. A Comparative Investigation of 17 Interven-
tions.” Journal of Experimental Psychology: General 143: 1765–85.
Machiavelli, Niccolò. (1532) 1911. The Prince. Translated by W. K. Marriott. New York: E. P.
Dutton & Co.
Macrae, C. Neil, Galen V. Bodenhausen, Alan B. Milne, and Jolanda Jetten. 1994. “Out of Mind
But Back in Sight: Stereotypes on the Rebound.” Journal of Personality and Social Psychol-
ogy 67: 808–17.
Mandeville, Bernard. 1714. The Fable of the Bees: Or, Private Vices, Publick Benefits. London:
Printed for J. Roberts, near the Oxford Arms in Warwick Lane.
Monin, Benoît, and Dale T. Miller. 2001. “Moral Credentials and the Expression of Prejudice.”
Journal of Personality and Social Psychology 81: 33–43.
Monteith, Margo J., Jeffry W. Sherman, and Patricia G. Devine. 1998. “Suppression as a Stereo-
type Control Strategy.” Personality and Social Psychology Review 2: 63–82.
Olin, Lauren, and John M. Doris. 2014. “Vicious Minds: Virtue Epistemology, Cognition, and
Scepticism.” Philosophical Studies 168: 665–92.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon Press.
Pritchard, Duncan. 2005. “Virtue Epistemology and the Acquisition of Knowledge.” Philosophical
Explorations 8: 229–43.
Pronin, Emily, Daniel Y. Lin, and Lee Ross. 2002. “The Bias Blind Spot: Perceptions of Bias in
Self Versus Others.” Personality and Social Psychology Bulletin 28: 369–81.
Railton, Peter. 1984. “Alienation, Consequentialism, and the Demands of Morality.” Philosophy
and Public Affairs 13: 134–71.
Ross, William David. 1930. The Right and the Good. Oxford: Oxford University Press.
Sassenberg, Kai, and Gordon B. Moskowitz. 2005. “Don’t Stereotype, Think Different! Overcom-
ing Automatic Stereotype Activation by Mindset Priming.” Journal of Experimental Social
Psychology 41: 506–14.
Sidgwick, Henry. 1907. The Methods of Ethics. 7th ed. Indianapolis, IN: Hackett.
Smith, Adam. (1776) 2000. The Wealth of Nations. New York: The Modern Library.
Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria
Origgi, and Deirdre Wilson. 2010. “Epistemic Vigilance.” Mind & Language 25: 359–93.
Sperber, Dan, and Hugo Mercier. 2012. “Reasoning as a Social Competence.” In Collective
Wisdom, edited by Hélène Landemore and Jon Elster, 368–92. New York: Cambridge
University Press.
Uhlmann, Eric Luis, and Geoffrey L. Cohen. 2007. “‘I Think It, Therefore It’s True’: Effects of
Self-perceived Objectivity on Hiring Discrimination.” Organizational Behavior and Human
Decision Processes 104 (2): 207–23.
Wilson, Timothy D. 2002. Strangers to Ourselves: Discovering the Adaptive Unconscious.
Cambridge, MA: Belknap Press.