
Disagreement

Frances, Bryan and Matheson, Jonathan, “Disagreement”, The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2018/entries/disagreement/>.

First published Fri Feb 23, 2018


We often find ourselves in disagreement with others. You may think nuclear energy is so
volatile that no nuclear energy plants should be built anytime soon. But you are aware that
there are many people who disagree with you on that very question. You disagree with your
sister regarding the location of the piano in your childhood home, with you thinking it was
in the primary living area and her thinking it was in the small den. You and many others
believe Jesus Christ rose from the dead; millions of others disagree.
It seems that awareness of disagreement can, at least in many cases, supply one with a
powerful reason to think that one’s belief is false. When you learned that your sister thought
the piano had been in the den instead of the living room, you acquired a good reason to think
it really wasn’t in the living room, as you know full well that your sister is a generally
intelligent individual, has the appropriate background experience (she lived in the house too),
and is about as honest, forthright, and good at remembering events from childhood as you
are. If, in the face of all this, you stick with your belief that the piano was in the living room,
will your retaining that belief be reasonable?
In the piano case there is probably nothing important riding on the question of what to do in
the face of disagreement. But in many cases our disagreements are of great weight, both in
the public arena and in our personal lives. You may disagree with your spouse or partner
about whether to live together, whether to get married, where you should live, or how to raise
your children. People with political power disagree about how to spend enormous amounts
of money, or about what laws to pass, or about wars to fight. If only we were better able to
resolve our disagreements, we would probably save millions of lives and prevent millions of
others from living in poverty.
This article examines the central epistemological issues tied to the recognition of
disagreement.
Compared to many other topics treated in this encyclopedia, the epistemology of
disagreement is a mere infant. While the discussion of disagreement isn’t altogether absent
from the history of philosophy, philosophers didn’t start, as a group, thinking about the topic
in a rigorous and detailed way until the 21st century. For that reason, it is difficult to know
what the primary issues and questions are concerning the general topic. At this early stage of
investigation we are just getting our feet wet. In this essay, we begin by trying to motivate
what we think should be the primary issues and questions before we move on to look at some
of the main ideas in the literature. In so doing we also introduce some new terminology and
make some novel distinctions that we think are helpful in navigating this relatively recent
debate.

• 1. Disagreement and Belief


• 2. Belief-Disagreement vs. Action-Disagreement
• 3. Response to Disagreement vs. Subsequent Level of Confidence
• 4. Disagreement with Superiors, Inferiors, Peers, and Unknowns
• 5. Peer Disagreements
o 5.1 The Equal Weight View
o 5.2 The Steadfast View
o 5.3 The Justificationist View
o 5.4 The Total Evidence View
o 5.5 Other Issues
• 6. Disagreement By the Numbers
• 7. Disagreement and Skepticism
• Bibliography
• Academic Tools
• Other Internet Resources
• Related Entries

1. Disagreement and Belief


To a certain extent, it may seem that there are just three doxastic attitudes to adopt regarding
the truth of a claim: believe it’s true, believe it’s false (i.e., disbelieve it), and suspend
judgment on it. In the most straightforward sense, two individuals disagree about a
proposition when they adopt different doxastic attitudes toward the same proposition (i.e.,
one believes it and one disbelieves it, or one believes it and one suspends judgment). But of
course there are levels of confidence one can have regarding a proposition as well. We may
agree that global warming is occurring but you may be much more confident than I am. It
can be useful to use ‘disagreement’ to cover any difference in levels of confidence: if X has one level of confidence regarding belief B’s truth while Y has a different level of confidence, then they “disagree” about B—even if this is a slightly artificial sense of
‘disagree’. These levels of confidence, or degrees of belief, are often represented as point
values on a 0–1 scale (inclusive), with larger values indicating greater degrees of confidence
that the proposition is true. Even if somewhat artificial, such representations allow for more
precision in discussing cases.
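To make the degree-of-belief representation concrete, here is a minimal sketch; the mapping and the 0.25/0.75 cutoffs are our own illustrative assumptions, not part of the entry:

```python
# Degrees of belief as points on the [0, 1] scale, with larger values
# indicating greater confidence that the proposition is true.
# The cutoffs below are illustrative assumptions only.

def attitude(credence: float) -> str:
    """Map a degree of belief in [0, 1] to a coarse doxastic attitude."""
    if not 0.0 <= credence <= 1.0:
        raise ValueError("credence must lie in [0, 1]")
    if credence >= 0.75:
        return "believe"
    if credence <= 0.25:
        return "disbelieve"
    return "suspend judgment"

# Two people can agree in the coarse sense (same attitude) while
# "disagreeing" in the fine-grained, levels-of-confidence sense:
yours, mine = 0.95, 0.80  # both believe global warming is occurring
assert attitude(yours) == attitude(mine) == "believe"
assert yours != mine  # yet a "disagreement" in the artificial sense above
```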
We are contrasting disagreements about belief with disagreements about matters of taste.
Our focus is on disagreements where there is a fact of the matter, or at least the participants
are reasonable in believing that there is such a fact.
2. Belief-Disagreement vs. Action-Disagreement
Suppose Jop and Dop are college students who are dating. They disagree about two matters:
whether it’s harder to get top grades in economics classes or philosophy classes, and whether
they should move in together this summer. The first disagreement is over the truth of a claim:
is the claim (or belief) ‘It is harder to get top grades in economics classes compared to
philosophy classes’ true or not? The second disagreement is over an action: should we move
in together or not (the action = moving in together)? Call the first kind of disagreement belief-
disagreement; call the second kind action-disagreement.
The latter is very different from the former. Laksha is a doctor faced with a tough decision
regarding one of her patients. She needs to figure out whether it’s best, all things considered,
to just continue with the medications she has been prescribing or stop them and go with
surgery. She confers closely with some of her colleagues. Some of them say surgery is the
way to go, others say she should continue with medications and see what happens, but no
one has a firm opinion: all the doctors agree that it’s a close call, all things considered. Laksha
realizes that as far as anyone can tell it really is a tie.
In this situation Laksha should probably suspend judgment on each of the two claims
‘Surgery is the best overall option for this patient’ and ‘Medication is the best overall option
for this patient’. When asked ‘Which option is best?’ she should suspend judgment.
That’s all well and good, but she still has to do something. She can’t just refuse to treat the
patient. Even if she continues to investigate the case for days and days, in effect she has made
the decision to not do surgery. She has made a choice even if she dithers.
The point is this: when it comes to belief-disagreements, there are three broad options with
respect to a specific claim: believe it, disbelieve it, and suspend judgment on it. (And of
course there are a great many levels of confidence to take as well.) But when it comes
to action-disagreements, there are just two options with respect to an action X: do X, don’t do X. Suspending judgment just doesn’t exist when it comes to an action. Or, to put it a different way, suspending judgment on whether to do X does exist but is pretty much the same thing as not doing X, since in both cases you don’t do X (Feldman 2006c).
Thus, there are disagreements over what to believe and what to do. Despite this distinction,
we can achieve some simplicity and uniformity by construing disagreements over what to do
as disagreements over what to believe. We do it this way: if we disagree over whether to do
action X, we are disagreeing over the truth of the claim ‘We should do X’ (or ‘I should do X’ or ‘X is the best thing for us to do’; no, these aren’t all equivalent). This translation
of action-disagreements into claim-disagreements makes it easy for us to
construe all disagreements as disagreements about what to believe, where the belief may or
may not concern an action. Keep in mind, though, that this “translation” doesn’t mean that
action-disagreements are just like belief-disagreements that don’t involve actions: the former
still requires a choice on what one is actually going to do.
With those points in mind, we can formulate the primary questions about the epistemology
of disagreement.
However, it is worth noting that agreement also has epistemological implications. If learning
that a large number and percentage of your epistemic peers or superiors disagree with you
should probably make you lower your confidence in your belief, then learning that those
same individuals agree with you should probably make you raise your confidence in your
belief—provided they have greater confidence in it than you did before you found out about
their agreement.
In posing the questions we start with a single individual who realizes that one or more other
people disagree/agree with her regarding one of her beliefs. We can formulate the questions
with regard to just disagreement or to agreement and disagreement; we also have the choice
of focusing on just agreement/disagreement or going with levels of confidence.
Here are the primary epistemological questions for just disagreement and no levels of
confidence:
Response Question: Suppose you realize that some people disagree with your belief B.
How must you respond to the realization in order for that response to be epistemically
rational (or perhaps wise)?
Belief Question: Suppose you realize that some people disagree with your belief B. How must you respond to the realization in order for your subsequent position on B to be
epistemically rational?
Here are the questions for agreement/disagreement plus levels of conviction:
Response Question*: Suppose you realize that some people have a confidence level in B that is different from yours. How must you respond to the realization in order for
that response to be epistemically rational (or perhaps wise)?
Belief Question*: Suppose you realize that some people have a confidence level in B that is different from yours. How must you respond to the realization in order for your subsequent position on B to be epistemically rational?

3. Response to Disagreement vs. Subsequent Level of Confidence


A person can start out with a belief that is irrational, obtain some new relevant evidence
concerning that belief, respond to that new evidence in a completely reasonable way, and yet
end up with an irrational belief. This fact is particularly important when it comes to posing
the central questions regarding the epistemology of disagreement (Christensen 2011).
Suppose Bub’s belief that Japan is a totalitarian state, belief J, is based on a poor reading of the evidence and a raging, irrational bias that rules his views on this topic. His bias has kept him from thinking through his evidence properly.
Then he gets some new information: some Japanese police have been caught on film beating
government protesters. After hearing this, Bub retains his old confidence level in J.
We take it that when Bub learns about the police, he has not acquired some new information
that should make him think ‘Wait a minute; maybe I’m wrong about Japan’. He shouldn’t
lose confidence in his belief J merely because he learned some facts that do not cast any
doubt on his belief!
The lesson of this story is this: Bub’s action of maintaining his confidence in his belief as a
result of his new knowledge is reasonable even though his retained belief itself is
unreasonable. Bub’s assessment of the original evidence concerning J was irrational, but his reaction to the new information was rational; his subsequent belief in J was (still) irrational (because although the video gives a little support to J, it’s not much). The question,
‘Is Bub being rational after he got his new knowledge?’ has two reasonable interpretations:
‘Is his retained belief in J rational after his acquisition of the new knowledge?’ vs. ‘Is his
response to the new knowledge rational?’
On the one hand, “rationality demands” that upon his acquisition of new knowledge Bub
drop his belief J that Japan is a totalitarian state: after all, his overall evidence for it is very
weak. On the other hand, “rationality demands” that upon his acquisition of new knowledge
Bub keep his belief J, given that that acquisition—which is the only thing that’s happened to him—gives him no reason to doubt J. This situation still might strike you as odd. After all,
we’re saying that Bub is being rational in keeping an irrational belief! But no: that’s not what
we’re saying. The statement ‘Bub is being rational’ is ambiguous: is it saying that Bub’s
retained belief J is rational or is it saying that Bub’s retaining of that belief was rational?
The statement can take on either meaning, and the two meanings end up with different
verdicts: the retained belief is irrational but the retaining of the belief is rational. In the first
case, a state is being evaluated, in the second, an action is being evaluated.
Consider a more mundane case. Jack hears a bump in the night and irrationally thinks there
is an intruder in his house (he has long had three cats and two dogs, so he should know by
now that bumps are usually caused by his pets; further, he has been a house owner long
enough to know full well that old houses like his make all sorts of odd noises at night, pets
or no). Jack has irrational belief BB: there is an intruder upstairs or there is an intruder
downstairs. Then after searching upstairs he learns that there is no intruder upstairs. Clearly,
the reasonable thing for him to do is infer that there is an intruder downstairs—that’s the
epistemically reasonable cognitive move to make in response to the new information—despite the fact that the new belief ‘There is an intruder downstairs’ is irrational in
an evidential sense.
These two stories show that one’s action of retaining one’s belief—that intellectual action—
can be epistemically fine even though the retained belief is not. And, more importantly, we
have to distinguish two questions about the acquisition of new information (which need not
have anything at all to do with disagreement):

• After you acquire some new information relevant to a certain belief B of yours, what should your new level of confidence in B be in order for your new level of confidence regarding B to be rational?
• After you acquire some new information relevant to a certain belief B of yours, what should your new level of confidence in B be in order for your response to the new information to be rational?
The latter question concerns an intellectual action (an intellectual response to the acquisition
of new information), whereas the former question concerns the subsequent level of
confidence itself, the new confidence level you end up with, which comes about partially as
a causal result of the intellectual action. As we have seen with the Japan and intruder stories
the epistemic reasonableness of the one is partially independent of that of the other.
4. Disagreement with Superiors, Inferiors, Peers, and Unknowns
A child has belief B that Hell is a real place located in the center of the earth. You disagree. This is a case in which you disagree with someone who you recognize to be your epistemic inferior on the question of whether B is true. You believe that Babe Ruth was the greatest
baseball player ever. Then you find out that a sportswriter who has written several books on
the history of baseball disagrees, saying that so-and-so was the greatest ever. In this case, you
realize that you’re disagreeing with an epistemic superior on the matter, since you know that
you’re just an amateur when it comes to baseball. In a third case, you disagree with your
sister regarding the name of the town your family visited on vacation when you were
children. You know from long experience that your memory is about as reliable as hers on
matters like this one; this is a disagreement with a recognized epistemic peer.
There are several ways to define the terms ‘superior’, ‘inferior’, and ‘peer’ (Elga 2007; see
section 5 below).
You can make judgments about how likely someone is, compared to you, to answer the question ‘Is belief B true?’ correctly. If you think she is more likely (e.g., you suppose that the odds that she will answer it correctly are about 90% whereas your odds are just around 80%), then you think she is your likelihood superior on that question; if you think she is less likely, then you think she is your likelihood inferior on that question; and if you think she is about equally likely, then you think she is your likelihood peer on that question. Another
way to describe these distinctions is by referencing the epistemic position of the various
parties. One’s epistemic position describes how well-placed one is, epistemically speaking,
with respect to a given proposition. The better one’s epistemic position, the more likely one
is to be correct.
There are many factors that help determine one’s epistemic position, or how likely one is to
answer ‘Is belief B true?’ correctly. Here are the main ones (Frances 2014):

• cognitive ability had while answering the question


• evidence brought to bear in answering the question
• relevant background knowledge
• time devoted to answering the question
• distractions encountered in answering the question
• relevant biases
• attentiveness when answering the question
• intellectual virtues possessed
Call these Disagreement Factors. Presumably, what determines that X is more likely than Y to answer ‘Is B true?’ correctly are the differences in the Disagreement Factors for X and Y.
For any given case of disagreement between just two people, the odds are that they will not be equivalent on all Disagreement Factors: X will surpass Y on some factors and Y will surpass X on other factors. If you are convinced that a certain person is clearly lacking compared to you on many Disagreement Factors when it comes to answering the question ‘Is B true?’, then you’ll probably say that you are more likely than she is to answer the question correctly, provided you are not lacking compared to her on other Disagreement Factors. If you are convinced that a certain person definitely surpasses you on many Disagreement Factors when it comes to answering ‘Is B true?’, then you’ll probably say that you are less likely than she is to answer the question correctly, provided you have no advantage over her. If you think the two of you differ in Disagreement Factors but the differences do not add up to one person having a net advantage (so you think any differences cancel out), then you’ll think you are peers on that question.
Notice that in this peer case you need not think that the two of you are equal on each Disagreement Factor. On occasion, a philosopher will define ‘epistemic peer’ so that X and Y are peers on belief B if and only if they are equal on all Disagreement Factors. If X and Y are equal on all Disagreement Factors, then they will be equally likely to judge B correctly, but the reverse does not hold. Deficiencies of a peer in one area may
be accounted for by advantages in other areas with the final result being that the two
individuals are in an equivalently good epistemic position despite the existence of some
inequalities regarding particular disagreement factors.
In order to understand the alternative definitions of ‘superior’, ‘inferior’, and ‘peer’, we will
look at two cases of disagreement (Frances 2014).
Suppose I believe B, that global warming is happening. Suppose I also believe P, that Taylor is my peer regarding B in this sense: I think we are equally likely to judge B correctly. I have this opinion of Taylor because I figure that she knows about as well as I do the basic facts about expert consensus, she understands and respects that consensus about as much as I do, and she based her opinion of B on those facts. (I know she has some opinion on B but I have yet to actually hear her voice it.) Thus, I think she is my likelihood peer on B.
But in another sense I don’t think she is my peer on B. After all, if someone asked me ‘Suppose you find out later today that Taylor sincerely thinks B is false. What do you think are the odds that you’ll be right and she’ll be wrong about B?’ I would reply with ‘Over 95%!’ I would answer that way because I’m very confident in B’s truth, and if I find out that Taylor disagrees with that idea, then I will be quite confident that she’s wrong and I’m right. So in that sense I think I have a definite epistemic advantage over her: given how confident I am in B, I think that if it turns out we disagree over B, there is a 95% chance I’m right and she’s wrong. Of course, given that I think that we are equally likely to judge B correctly and I’m very confident in B, I’m also very confident that she will judge B to be true; so when I’m asked to think about the possibility that Taylor thinks B is false, I think I’m being asked to consider a very unlikely scenario. But the important point here is this: if I have the view that if it turns out that she really thinks B is false then the odds that I’m right and she’s wrong are 95%, then in some sense my view is that she’s not “fully” my peer on B, since when it comes to the possibility of disagreement I’m very confident that I will be in the right and she won’t be.
Now consider another case. Suppose Janice and Danny are the same age and take all the same
math and science classes through high school. They are both moderately good at math. In
fact, they almost always get the same grades in math. On many occasions they come up with
different answers for homework problems. As far as they have been able to determine, in
those cases 40% of the time Janice has been right, 40% of the time Danny has been right, and
20% of the time they have both been wrong. Suppose they both know this interesting fact
about their track records! Now they are in college together. Danny believes, on the basis of
their track records, that on the next math problem they happen to disagree about, the
probability that Janice’s answer is right equals the probability that his answer is right—unless
there is some reason to think one of them has some advantage in this particular case (e.g.,
Danny has had a lot more time to work on it, or some other significant discrepancy in
Disagreement Factors). Suppose further that on the next typical math problem they work on
Danny thinks that neither of them has any advantage over the other this time around. And
then Danny finds out that Janice got an answer different from his.
In this math case Danny first comes to think that B (his answer) is true. But he also thinks that if he were to discover that Janice thinks B is false, the probability that he is right and Janice is wrong is equal to the probability that he is wrong and Janice is right. That’s very different from the global warming case, in which I thought that if I were to discover that Taylor thinks B is false, the probability that I’m right and she’s wrong is 19 times the probability that I’m wrong and she’s right (95% is 19 times 5%).
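The arithmetic behind this contrast can be checked with a short sketch; the function name is ours, used only for illustration:

```python
# Odds that I am the right one versus the other party, conditional on
# our disagreeing, computed from my probability of being right.

def odds_in_my_favor(p_right: float) -> float:
    """Return the odds ratio (me right):(other right), given disagreement."""
    return p_right / (1.0 - p_right)

# Global warming case: conditional on disagreeing with Taylor, I give
# myself a 95% chance of being right -- roughly 19-to-1 odds in my favor.
assert abs(odds_in_my_favor(0.95) - 19.0) < 1e-9

# Math case: in past disagreements Janice was right 40% of the time,
# Danny 40%, and both were wrong 20%. Among the cases where one of them
# is right, each has a 0.5 chance of being the right one -- even odds.
janice_right, danny_right = 0.40, 0.40
p_danny = danny_right / (janice_right + danny_right)
assert p_danny == 0.5
assert odds_in_my_favor(p_danny) == 1.0
```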
Let’s say that I think you’re my conditional peer on B if and only if, before I find out your view on B but after I have come to believe B, I think that if it turns out that you disbelieve B, then the chance that I’m right about B is equal to the chance that you’re right about B. So although I think Taylor is my likelihood peer on the global warming belief, I don’t think she is my conditional peer on that belief. I think she is my conditional inferior on that matter. But in the math case Danny thinks Janice is his likelihood peer and his conditional peer on the relevant belief.
So, central to answering the Response Question and the Belief Question is the following:
Better Position Question: Are the people who disagree with B in a better epistemic position to correctly judge the truth-value of the belief than the people who agree with B?
Put in terms of levels of confidence we get the following:
Better Position Question*: Are the people who have a confidence level in B that is different from yours in a better epistemic position to correctly judge the truth-value of the belief than the people who have the same confidence level as yours?
The Better Position Question is often not very easy to answer. For the majority of cases of disagreement, with X realizing she disagrees with Y, X will not have much evidence to think Y is her peer, superior, or inferior when it comes to correctly judging B. For
instance, if I am discussing with a neighbor whether our property taxes will be increasing
next year, and I discover that she disagrees with me, I may have very little idea how we
measure up on the Disagreement Factors. I may know that I have more raw intelligence than
she has, but I probably have no idea how much she knows about local politics, how much
she has thought about the issue before, etc. I will have little basis for thinking I’m her
superior, inferior, or peer. We can call these the unknown cases. Thus, when you discover
that you disagree with someone over B, you need not think, or have reason to think, that she is your peer, your superior, or your inferior when it comes to judging B.
A related question is whether there is any important difference between cases where you are
justified in believing your interlocutor is your peer and cases where you may be justified in
believing that your interlocutor is not your peer but lack any reason to think that you, or your
interlocutor, are in the better epistemic position. Peerhood is rare, if not entirely a fictional
idealization, yet in many real-world cases of disagreement we are not justified in making a
judgment regarding which party is better positioned to answer the question at hand. The
question here is whether different answers to the Response Question and the Belief Question
are to be given in these two cases. Plausibly, the answer is no. An analogy may help. It is
quite rare for two people to have the very same weight. So for any two people it is quite
unlikely that they are ‘weight peers’. That said, in many cases it may be entirely unclear
which party weighs more than the other party, even if they agree that it is unreasonable to
believe they weigh the exact same amount. Rational decisions about what to do where the
weight of the party matters do not seem to differ in cases where there are ‘weight peers’ and
cases where the parties simply lack a good reason to believe either party weighs more.
Similarly, it seems that the answers to the Response Question and the Belief Question will
not differ in cases of peer disagreement and cases where the parties simply lack any good
reason to believe that either party is epistemically better positioned on the matter.
Another challenge in answering the Better Position Question occurs when you are a novice
about some topic and you are trying to determine who the experts on the topic are. This is
what Goldman terms the ‘novice/expert problem’ (Goldman 2001). While novices ought to
turn to experts for intellectual guidance, a novice in some domain seems ill-equipped to even
determine who the experts in that domain are. Hardwig (1985, 1991) claims that such novice
reliance on an expert must necessarily be blind, and thus exhibit an unjustified trust. In
contrast, Goldman explores five potential evidential sources for reasonably determining
someone to be an expert in a domain:

A. Arguments presented by the contending experts to support their own views and
critique their rivals’ views.
B. Agreement from additional putative experts on one side or other of the subject in
question.
C. Appraisals by “meta-experts” of the experts’ expertise (including appraisals reflected
in formal credentials earned by the experts).
D. Evidence of the experts’ interests and biases vis-a-vis the question at issue.
E. Evidence of the experts’ past “track-records”. (Goldman 2001, 93.)
The vast majority of the literature on the epistemic significance of disagreement, however,
concerns recognized peer disagreement (for disagreement with superiors, see Frances 2013).
We turn now to this issue.
5. Peer Disagreements
Before we begin our discussion of peer disagreements it is important to set aside a number
of cases. Epistemic peers with respect to P are in an equally good epistemic position with respect to P. Peers about P can both be in a very good epistemic position with respect to P, or they could both be in a particularly bad epistemic position with respect to P. Put differently, two fools could be peers. However, disagreement between fool peers has not been of particular epistemic interest in the literature. The literature on peer disagreement has instead focused on disagreement between competent epistemic peers, where competent peers with respect to P are in a good epistemic position with respect to P—they are likely to be correct about P. Our discussion of peer disagreement will be restricted to competent peer
disagreement. In the literature on peer disagreements, four main views have emerged: the
Equal Weight View, the Steadfast View, the Justificationist View, and the Total Evidence
View.
5.1 The Equal Weight View
The Equal Weight View is perhaps the most prominently discussed view on the epistemic
significance of disagreement. Competitor views of peer disagreements are best understood
as a rejection of various aspects of the Equal Weight View, so it is a fitting place to begin
our examination. As we see it, the Equal Weight View is a combination of three claims:
Defeat: Learning that a peer disagrees with you about P gives you a reason to believe you are mistaken about P.
Equal Weight: The reason to think you are mistaken about P coming from your peer’s opinion about P is just as strong as the reason to think you are correct about P coming from your opinion about P.
Independence: Reasons to discount your peer’s opinion about P must be independent of the disagreement itself.
Defenses of the Equal Weight View in varying degrees can be found in Bogardus 2009,
Christensen 2007, Elga 2007, Feldman 2006, and Matheson 2015a. Perhaps the best way to
understand the Equal Weight View comes from exploring the motivation that has been given
for the view. We can distinguish between three broad kinds of support that have been given
for the view: examining central cases, theoretical considerations, and the use of analogies.
The central case that has been used to motivate the Equal Weight View is Christensen’s
Restaurant Check Case.
The Restaurant Check Case. Suppose that five of us go out to dinner. It’s time to pay the
check, so the question we’re interested in is how much we each owe. We can all see the bill
total clearly, we all agree to give a 20 percent tip, and we further agree to split the whole cost
evenly, not worrying over who asked for imported water, or skipped dessert, or drank more
of the wine. I do the math in my head and become highly confident that our shares are $43
each. Meanwhile, my friend does the math in her head and becomes highly confident that our
shares are $45 each. (Christensen 2007, 193.)
Understood as a case of peer disagreement, where the friends have a track record of being
equally good at such calculation, and where neither party has a reason to believe that on this
occasion either party is especially sharp or dull, Christensen claims that upon learning of the
disagreement regarding the shares he should become significantly less confident that the
shares are $43 and significantly more confident that they are $45. In fact, he claims that these
competitor propositions ought to be given roughly equal credence.
The Restaurant Check Case supports Defeat since in learning of his peer’s belief, Christensen
becomes less justified in his belief. His decrease in justification is seen by the fact that he
must lower his confidence to be in a justified position on the issue. Learning of the
disagreement gives him reason to revise and an opportunity for epistemic improvement.
Further, the Restaurant Check Case supports Equal Weight, since the reason Christensen
gains to believe he is mistaken is quite strong. Since he should be equally confident that the
shares are $45 as that they are $43, his reasons equally support these claims. Giving the peer
opinions equal weight has typically been understood to require ‘splitting the difference’
between the peer opinions, at least when the two peer opinions exhaust one’s evidence about
the opinions on the matter. Splitting the difference is a kind of doxastic compromise that calls
for the peers to meet in the middle. So, if one peer believes P and one peer disbelieves P,
giving the peer opinions equal weight would call for each peer to suspend judgment about P.
Applied to the richer doxastic picture that includes degrees of belief, if one peer has a 0.7
degree of belief that P and the other has a 0.3 degree of belief that P, giving the peer
opinions equal weight will call for each peer to adopt a 0.5 degree of belief that P. It is
important to note that it is the peer attitudes that get ‘split’, not the contents of the relevant
propositions. For instance, in the Restaurant Check Case, splitting the difference does not
require believing that the shares are $44. Perhaps it is obvious that the shares are not an even
amount. Splitting the difference is only with respect to the disparate doxastic attitudes
concerning any one proposition (the disputed target proposition). The content of the
propositions believed by the parties is not where the compromise occurs. Finally, the
Restaurant Check Case supports Independence. The reasons that Christensen could have to
discount his peer’s belief about the shares could include that he had a little too much to drink
tonight, that he is especially tired, that Christensen double checked but his friend didn’t, etc.,
but could not include that the shares actually are $43, that Christensen disagrees, etc.
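The ‘splitting the difference’ rule described above can be put as simple arithmetic; note that what is averaged is the two credences, not the believed contents (nothing in the function says the shares are $44).

```python
def split_the_difference(my_credence, peer_credence):
    """Equal Weight conciliation for two peers: adopt the mean credence."""
    return (my_credence + peer_credence) / 2

# One peer has a 0.7 degree of belief that P, the other a 0.3 degree:
print(round(split_the_difference(0.7, 0.3), 2))  # 0.5

# Outright belief vs. disbelief (credence 1 vs. 0) averages to maximal
# uncertainty, i.e., suspension of judgment:
print(split_the_difference(1.0, 0.0))  # 0.5
```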
Theoretical support for the Equal Weight View comes from first thinking about ordinary
cases of testimony. Learning that a reliable inquirer has come to believe a proposition gives
you a reason to believe that proposition as well. The existence of such a reason does not seem
to depend upon whether you already have a belief about that proposition. Such testimonial
evidence is some evidence to believe the proposition regardless of whether you agree,
disagree, or have never considered the proposition. This helps motivate Defeat, since a reason
to believe the proposition when you disbelieve it amounts to a reason to believe that you have
made a mistake regarding that proposition. Similar considerations apply to more fine-grained
degrees of confidence. Testimonial evidence that a reliable inquirer has adopted a 0.8 degree
of belief that P gives you a reason to adopt a 0.8 degree of belief toward P, and this seems
to hold regardless of whether you already have a level of confidence that P.
Equal Weight is also motivated by considerations regarding testimonial evidence. The weight
of a piece of testimonial evidence is proportional to the epistemic position of the testifier (or
what the hearer’s evidence supports about the epistemic position of the testifier). So, if you
have reason to believe that Jai’s epistemic position with respect to P is inferior to Mai’s,
then discovering that Jai believes P will be a weaker reason to believe P than discovering
that Mai believes P. However, in cases of peer disagreement, both parties are in an equally
good epistemic position, so it would follow that their opinions on the matter should be given
equal weight.
Finally, Independence has been theoretically motivated by examining what kind of reasoning
its denial would permit. In particular, a denial of independence has been thought to permit a
problematic kind of question-begging by allowing one to use one’s own reasoning to come
to the conclusion that one’s peer is mistaken. Something seems wrong with the following line
of reasoning, “My peer believes not-P, but I concluded P, so my peer is wrong” or “I
thought S was my peer, but S thinks not-P, and I think P, so S is not my peer after all”
(see Christensen 2011). Independence forbids both of these ways of blocking the reason to
believe that you are mistaken from the discovery of the disagreement.
The Equal Weight View has also been motivated by way of analogies. Of particular
prominence are analogies to thermometers. Thermometers take in pieces of information as
inputs and give certain temperature verdicts as outputs. Humans are a kind of cognitive
machine that takes in various kinds of information as inputs and gives doxastic attitudes as
outputs. In this way, humans and thermometers are analogous. Support for the Equal Weight
View has come from examining what it would be rational to believe in a case of peer
thermometer disagreement. Suppose that you and I know we have equally reliable
thermometers and while investigating the temperature of the room we are in discover that our
thermometers give different outputs (yours reads ‘75’ and mine reads ‘72’). What is it rational
for us to believe about the room temperature? It seems it would be irrational for me to
continue believing it was 72 simply because that was the output of the thermometer that I
was holding. Similarly, it seems irrational for me to believe that your thermometer is
malfunctioning simply because my thermometer gave a different output. It seems that I would
need some information independent from this ‘disagreement’ to discount your thermometer.
So, it appears that I have been given a reason to believe that the room’s temperature is not 72
by learning of your thermometer, that this reason is as strong as my reason to believe it is 72,
and that this reason is only defeated by independent considerations. If the analogy holds, then
we have reason to accept each of the three theses of the Equal Weight View.
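One way to make the thermometer analogy numerically precise (a modeling gloss of ours, not the article’s): if the two readings come from equally reliable instruments whose errors are symmetric and of equal spread, the best single estimate of the temperature is the midpoint, which is just the equal-weight verdict.

```python
def pooled_estimate(readings):
    """Best estimate from equally reliable instruments with symmetric,
    equal-variance noise: the sample mean (a modeling assumption)."""
    return sum(readings) / len(readings)

# Your thermometer reads 75, mine reads 72; neither gets priority:
print(pooled_estimate([75, 72]))  # 73.5
```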
The Equal Weight View is not the only game in town when it comes to the epistemic
significance of disagreement. In what follows we will examine the competitor views of
disagreement highlighting where and why they depart from the Equal Weight View.
5.2 The Steadfast View
On the spectrum of views on the epistemic significance of disagreement, the Equal Weight
View and the Steadfast View lie on opposite ends. While the Equal Weight View is quite
conciliatory, the Steadfast View maintains that sticking to one’s guns in a case of peer
disagreement can be rational. That is, discovering a peer disagreement does not mandate any
doxastic change. While the Equal Weight View may be seen to emphasize intellectual
humility, the Steadfast View emphasizes having the courage of your convictions. Different
motivations for Steadfast Views can be seen to reject distinct aspects of the Equal Weight
View. We have organized the various motivations for the Steadfast View according to which
aspect of the Equal Weight View it (at least primarily) rejects.
5.2.1 Denying Defeat
Defeat has been rejected by defenders of the Steadfast View in a number of ways.
First, Defeat has been denied with an appeal to private evidence. Peter van Inwagen (1996)
has defended the Steadfast View by maintaining that in cases of peer disagreement one can
appeal to having an incommunicable insight or special evidence that the other party lacks.
The basic idea is that if I have access to a special body of evidence that my peer lacks access
to, then realizing that my peer disagrees with me need not give me a reason to think that I’ve
made any mistake. After all, my peer doesn’t have everything that I have to work with
regarding an evaluation of P and it can be reasonable to think that if the peer were to be
aware of everything that I am aware of, she would also share my opinion on the matter.
Further, some evidence is undoubtedly private. While I can tell my peer about my intuitions
or my experiences, I cannot give him my intuitions or experiences. Given our limitations,
peers can never fully share their evidence. However, if the evidence isn’t fully shared, then
my peer evaluating his evidence one way needn’t show that I have mis-evaluated my
evidence. Our evidence is importantly different. While van Inwagen’s claims may entail that
the two disagreeing parties are not actually peers due to their evidential differences, these
considerations may be used to resist Defeat, at least on looser conceptions of peerhood that do
not require evidential equality.
A related argument is made by Huemer (2011), who argues for an agent-centered account of
evidence. On this account, an experience being your own evidentially counts for more than
someone else’s experience. So, with this conception of evidence in hand there will be an
important evidential asymmetry even in cases where both parties share all their evidence.
Defenders of the Equal Weight View have noted that these considerations cut both ways (see
Feldman 2006). For instance, while you may not be able to fully share your evidence with
your peer, these same considerations motivate that your peer similarly cannot fully share his
or her evidence with you. So, the symmetry that motivated the Equal Weight View may still
obtain since both parties have private evidence. A relevant asymmetry only obtains if one
has special reason to believe that their body of private evidence is privileged over their peer’s,
and the mere fact that it is one’s own would not do this. Feldman’s Dean on the Quad case
can also help make this clear.
Dean on the Quad. Suppose you and I are standing by the window looking out on the quad.
We think we have comparable vision and we know each other to be honest. I seem to see
what looks to me like the dean standing out in the middle of the quad. (Assume that this is
not something odd. He’s out there a fair amount.) I believe that the dean is standing on the
quad. Meanwhile, you seem to see nothing of the kind there. You think that no one, and thus
not the dean, is standing in the middle of the quad. We disagree. Prior to our saying anything,
each of us believes reasonably. Then I say something about the dean’s being on the quad,
and we find out about our situation. (2007, 207–208.)
Feldman takes this case to be one where both parties should significantly conciliate even
though it is clear that both possess private evidence. While both parties can report about their
experience, neither party can give their experience to the other. The experiential evidence
possessed by each party is private. So, if conciliation is still called for, we have reason to
question the significance of private evidence.
Second, Defeat has been denied by focusing on how things seem to the subject. Plantinga
(2000a) has argued that there is a sense of justification that is simply doing the best that one
can. Plantinga notes that despite all the controlled variables an important asymmetry remains
even in cases of peer disagreement. In cases where I believe P and I discover that my peer
disbelieves P, often P will continue to seem true to me. That is, there is an important
phenomenological difference between the two peers—different things seem true to them.
Plantinga claims that given that we are fallible epistemic creatures some amount of epistemic
risk is inevitable, and given this, we can do no better than believe in accordance with what
seems true to us. So, applied to cases of peer disagreement, even upon learning that my peer
disbelieves P, so long as P continues to seem true to me, it is rational for me to continue
to believe. Any reaction to the disagreement will contain some epistemic risk, so I might as
well go with how things seem to me. A similar defense of Steadfast Views which emphasizes
the phenomenology of the subject can be found in Henderson et al. 2017.
While an individual may not be to blame for continuing to believe as things seem to them,
defenders of the Equal Weight View have claimed that the notion of epistemic justification
at issue here is distinct. Sometimes doing the best one can is insufficient, and while some
epistemic risk is inevitable, it does not follow that the options are equally risky. While your
belief may still seem true to you having discovered the disagreement, other things that seem
true to you are relevant as well. For instance, it will seem to you that your interlocutor is an
epistemic peer (that they are in an equally good epistemic position on the matter) and that
they disagree with you. Those additional seeming states have epistemological import. In
particular, they give you reason to doubt that the truth about the disputed belief is as it seems
to you. The mere fact that your belief continues to seem true to you is unable to save its
justificatory status. Consider the Müller-Lyer illusion:
[Figure: the Müller-Lyer illusion, two lines A and B of equal length.]
To most, line B seems to be longer, but a careful measurement reveals that A and B are
of equal length. Despite knowing of the illusion, however, line B continues to seem longer
to many. Nevertheless, given that it also seems that a reliable measurement indicates that the
lines are of equal length, one is not justified in believing that B is longer, despite its
continuing to seem that way. This result holds even when we appreciate our fallibility and
the fallibility of measuring instruments. A parallel account appears to apply to cases of peer
disagreement. Even if your original belief continues to seem true to you, you have become
aware of information that significantly questions that seeming state. Further, we can imagine
a scenario where P seems true to me and I subsequently discover 10,000 peers and superiors
on the issue who disagree with me about P. Nevertheless, when I contemplate P, it still
seems true to me. In such a case, sticking to my guns about P seems to be neither doing the
best that I can nor the reasonable thing to do.
Third, Defeat has been denied by denying that peer opinions about P are evidence that
pertains to P. Kelly (2005) distinguishes the following three claims:

1. Proposition P is true.
2. Body of evidence E is good evidence that P is true.
3. A competent peer believes P on the basis of E.
Kelly (2005) argues that while 3 is evidence for 2, it is not evidence for 1. If 3 is not evidence
for 1, then in learning 3 (by discovering the peer disagreement) one does not gain any
evidence relevant to the disputed proposition. If learning of the peer disagreement doesn’t
affect one’s evidence relevant to the disputed proposition, then such a discovery makes no
change to which doxastic attitude the peers are justified in taking toward the target
proposition. On this view, the discovery of peer disagreement makes no difference to what
you should believe about the disputed proposition.
Why think that 3 is not evidence for 1? Kelly (2005) cites several reasons. First, when people
cite their justification for their beliefs, they do not typically cite things like 3. We typically
treat the fact that someone believes a proposition as the result of the evidence for that
proposition, not as another piece of evidence for that proposition. Second, since people form
beliefs on the basis of a body of evidence, to count their belief as yet another piece of
evidence would amount to double-counting that original body of evidence. On this line of
thought, one’s belief that P serves as something like a place-holder for the evidence upon
which one formed the belief. So, to count both the belief and the original evidence would be
to double-count the original evidence, and double-counting is not a legitimate way of
counting.
Defenders of the Equal Weight View have responded by claiming that the impropriety in
citing one’s own belief as evidence for the proposition believed can be explained in ways
that do not require that one’s belief is not in fact evidence. For instance, it could be that
conversational maxims would be violated since the fact that one believes the proposition is
already understood to be the case by the other party. Alternatively, citing one’s own belief as
evidence may exhibit hubris in a way that many would want to avoid. Finally, it seems clear
that someone else’s belief that P can be evidence for P, so denying that the subject’s belief
can be evidence for the subject entails a kind of relativity of evidence that some reject.
Regarding the double-counting, it has been argued that the fact that a reliable evidential
evaluator has evaluated a body of evidence to support a proposition is a new piece of
evidence, one that at least enhances the support between the body of evidence and the target
proposition. For instance, that a forensic expert evaluates the relevant forensic evidence to
support the defendant’s guilt appears to be an additional piece of evidence in favor of the
defendant’s guilt, rather than a mere repetition of that initial forensic evidence.
Finally, Defeat has been denied by appealing to epistemic permissiveness. The Equal Weight
View, and Defeat in particular, has been thought to rely on the Uniqueness Thesis.
Uniqueness Thesis: For any body of evidence, E, and proposition, P, E justifies at most
one competitor doxastic attitude toward P.
If a body of evidence can only support one doxastic attitude among belief, disbelief, and
suspension of judgment with respect to P, and two people who share their evidence disagree
about P, then one of them must have an unjustified attitude. So, if the Uniqueness Thesis is
true, there is a straightforward route to Defeat. However, if evidence is permissive, allowing
for multiple distinct justified attitudes toward the same proposition, then discovering that
someone has evaluated your shared evidence differently than you have need not give you any
reason to think that you have made a mistake. If evidence is permissive, then you may both
have justified responses to the shared evidence even though you disagree. So, another way
to motivate the Steadfast View is to endorse evidential permissiveness. For reasons to reject
or doubt the Uniqueness Thesis, see Ballantyne and Coffman 2011, Conee 2009, Frances
2014, Goldman 2010, Kelly 2010, Kopec 2015, Raleigh 2017, Rosen 2001, and Rosa 2012.
Defenses of the Equal Weight View either defend the Uniqueness Thesis (see Dogramaci and
Horowitz 2016, Greco and Hedden 2016, Matheson 2011, White 2005, White 2013) or argue
that the Equal Weight View is not actually committed to evidential uniqueness (see
Christensen 2009, Christensen 2016, Cohen 2013, Lee 2003, Levinstein 2017, Peels and
Booth 2014, and Henderson et al. 2017).
5.2.2 Denying Equal Weight
The Steadfast View has also been motivated by denying Equal Weight. If your peer’s opinion
about P does not count for as much as your own opinion, then you may not need to make
any doxastic conciliation. While most find it implausible that your own opinion can count
for more merely because it is your own, a related and more plausible defense comes from
appealing to self-trust. Enoch (2010), Foley (2001), Pasnau (2015), Schafer (2015),
Wedgwood (2007; 2010), and Zagzebski (2012) have all appealed to self-trust in responding
to peer disagreements. Foley emphasizes the essential and ineliminable role of first-personal
reasoning. Applied to cases of disagreement, Foley claims, “I am entitled to make what I can
of the conflict using the faculties, procedures, and opinions I have confidence in, even if these
faculties, procedures, and opinions are the very ones being challenged by others” (2001, 79).
Similarly, Wedgwood asserts that it is rational to have a kind of egocentric bias—a
fundamental trust in one’s own faculties and mental states. On this account, while peer
disagreements have a kind of symmetry from the third-person perspective, neither party
occupies that perspective. Rather, each party to the disagreement has a first-person
perspective from which it is rational to privilege itself. Self-trust is fundamental and the trust
that one must place in one’s own faculties and states simply cannot be given to another.
Opponents have rejected the epistemic importance of the first-person perspective (see
Bogardus 2013b and Rattan 2014). While the first-person perspective is ineliminable, it is
not infallible. Further, there are reasons from the first-person perspective to make doxastic
conciliation. It is my evidence that supports that my interlocutor is my peer, and my evidence
about what she believes, which together call for doxastic change. So, conciliation can be seen
to be called for from within the first-person perspective. One needn’t, and indeed cannot, abandon
one’s own perspective in dealing with disagreement. There are also worries concerning what
such an emphasis on self-trust would permit. If self-trust is relevant in cases of peer
disagreement, it is difficult to see how it is not relevant in cases of novice-expert
disagreement. However, most maintain that when the novice learns that the expert disagrees
he should make some doxastic movement if not completely defer. So, self-trust cannot be the
ultimate deciding factor in all cases of disagreement.
5.2.3 The Right Reasons View
A final motivation for the Steadfast View comes from re-evaluating the evidential support
relations in a case of peer disagreement. It will be helpful here to distinguish between two
kinds of evidence.
First-Order Evidence: First-order evidence for P is evidence that directly pertains to P.
Higher-Order Evidence: Higher-order evidence for P is evidence about one’s evidence
for P.
So, the cosmological argument, the teleological argument, and the problem of evil are all
items of first-order evidence regarding God’s existence, whereas the fact that a competent
evaluator of such evidence finds it on balance to support God’s existence is a piece of higher-
order evidence that God exists. That a competent evidential evaluator has evaluated a body
of evidence to support a proposition is evidence that the body of evidence in question does
in fact support that proposition.
Applied to cases of peer disagreement, the first-order evidence is the evidence directly
pertaining to the disputed proposition, and each peer opinion about the disputed proposition
is the higher-order evidence (it is evidence that the first-order evidence supports the
respective attitudes).
The Right Reasons View is a steadfast view of peer disagreement that emphasizes the role of
the shared first-order evidence in peer disagreements. Following Kelly (2005) we can
represent the discovery of a peer disagreement as follows:

• At t, my body of evidence consists of E (the original first-order evidence for P).
• At t′, having discovered the peer disagreement, my body of evidence consists of the
following:
i. E (the original first-order evidence for P).
ii. The fact that I am competent and believe P on the basis of E.
iii. The fact that my peer is competent and believes not-P on the basis of E.
According to the Right Reasons View, the two pieces of higher-order evidence (ii) and (iii)
are to be accorded equal weight. Having weighed (ii) and (iii) equally, they neutralize each
other in my total body of evidence at t′. However, with (ii) and (iii) neutralized, I am left
with (i) and am justified in believing what (i) supports. The Right Reasons View then notes
that what I am justified in believing at t and what I am justified in believing at t′ is exactly
the same. In both cases what I should believe is entirely a matter of what E supports, so what
matters in a case of peer disagreement is what the first-order evidence supports. If I believed
in accordance with my evidence at t, then learning of the peer disagreement does nothing to
alter what I should believe about P at t′. Having rightly responded to my reasons at t,
nothing epistemically changes regarding what attitude I should have toward P.
This argument for the Right Reasons View has been responded to in several ways. Kelly
(2010) has since rejected the argument, claiming that when a greater proportion of one’s
evidence supports suspending judgment some conciliation will be called for. Since the
higher-order evidence calls for suspending judgment regarding the disputed proposition,
there will be a conciliatory push even if the original first order evidence still plays an
important role in what attitude is justified. Others have responded to the argument by
rejecting Kelly’s original description of the case (see Matheson 2009). If my evidence
at t includes not only the first-order evidence, but also the higher-order evidence about
myself (ii), then even if the new piece of higher-order evidence gained at t′, (iii), cancels
out (ii), this will still call for some doxastic conciliation from t to t′. Alternatively, (ii) and
(iii) can be seen to together call for a suspension of judgment over whether E supports P.
Some have argued that a justified suspension of judgment over whether your evidence
supports P has it that your total evidence supports a suspension of judgment toward P (see
Feldman 2006 and Matheson 2015a). See Lasonen-Aarnio 2014 for an alternative view of
the impact of higher-order evidence.
A more recent defense of the Right Reasons View is found in Titelbaum 2015. Titelbaum
argues for the Fixed Point Thesis – that mistakes about rationality are mistakes of rationality.
In other words, it is always a rational mistake to have a false belief about rationality. So, on
this view a false belief about what attitude is rational does not ‘trickle down’ to affect the
rationality of the lower-level belief. Given this, if an individual’s initial response to the
evidence is rational, no amount of misleading higher-order evidence affects the rationality of
that belief. A correct response to the first-order evidence remains correct regardless of what
higher-order evidence is added.
A remaining problem for the Right Reasons View is its verdicts in paradigm cases of peer
disagreement. Many have the strong intuition that conciliation is called for in the Restaurant
Check Case regardless of whether you correctly evaluated the first-order evidence.
5.3 The Justificationist View
On the spectrum of views of the epistemic significance of disagreement, the Justificationist
View lies somewhere in between the Equal Weight View and the Steadfast View. In
defending the Justificationist View, Jennifer Lackey agrees with the Equal Weight View’s
verdicts in cases like the Restaurant Check Case, but thinks that not all cases should be
handled in this way. Along these lines she gives the following:
Elementary Math. Harry and I, who have been colleagues for the past six years, were drinking
coffee at Starbucks and trying to determine how many people from our department will be
attending the upcoming APA. I, reasoning aloud, say, ‘Well, Mark and Mary are going on
Wednesday, and Sam and Stacey are going on Thursday, and since 2+2=4, there will be four
other members of our department at that conference.’ In response, Harry asserts, ‘But 2+2
does not equal 4.’ (Lackey 2010a, 283.)
In Elementary Math, Lackey finds it implausible that she should become less confident that
2+2=4, never mind to split the difference with her interlocutor and suspend judgment about
the matter. In other words, the claim is that the Equal Weight View gives the wrong verdicts
in what we might call cases of ‘extreme disagreement’. What justifies treating Elementary
Math differently than the Restaurant Check Case? According to Lackey, if prior to
discovering the peer disagreement you are highly justified in believing the soon to be
disputed proposition, then upon discovering the peer disagreement little to no conciliation is
called for. So, since Lackey is highly justified in believing that 2+2=4 prior to talking to her
colleague, no conciliation is called for, but since Christensen was not highly justified in
believing that the shares are $43 prior to discovering the disagreement, a great deal of
conciliation is called for. According to the Justificationist View, one’s antecedent degree of
justification determines the rational response to peer disagreement. Strong antecedent
justification for believing the target proposition matters since when coupled with the
discovered disagreement you now have reasons to believe your interlocutor is not your peer
after all. In Elementary Math, Lackey should significantly revise her views about her
colleague’s epistemic position regarding elementary math. In contrast, the Restaurant Check
Case calls for no similar demotion. This difference is explained by the differing degrees of
antecedent justification.
Applied to our framework, the Justificationist View denies Independence. In cases where
your first-order evidence strongly supports believing P, this fact can be used to reassess your
interlocutor’s epistemic credentials. Independence only permitted information from ‘outside’
the disagreement to affect assessment of peerhood credentials, but here, the fact that your
interlocutor disagrees with something you are highly justified in believing gives you a reason
to discount his opinion on the matter.
Lackey defends the legitimacy of such a demotion due to the existence of personal
information. In any case of peer disagreement, I will have information about myself that I
simply lack (or lack to the same extent) regarding my interlocutor. I will always be more
aware of my alertness, sincerity, open-mindedness, and so forth, than I will be of my
interlocutor. A similar claim is defended in Benjamin 2015. This asymmetry, when coupled
with my high antecedent justification for believing the disputed proposition, makes it rational
to demote my alleged peer. Since in extreme disagreements one party is severely
malfunctioning, my personal information best supports the explanation that it is my
peer, rather than I, who is malfunctioning.
The Justificationist View has been criticized in several ways. Some deny that high
antecedent justification for believing the target proposition can make the relevant difference
(see Christensen 2007, Vavova 2014a, 2014b). Consider the following case:
Lucky Lotto. You have a ticket in a million-ticket lottery. Each ticket is printed with three
six-digit numbers that, when added, yield the seven-digit number that is entered into the
lottery. Given the odds, I am highly justified in believing that your ticket is a loser, but I
nevertheless add the numbers on your ticket just for fun. Having added the numbers and
comparing the sum to the winning number – no match – I thereby become even more justified
in believing that you did not win. Meanwhile, you are adding up your numbers as well, and
comparing them to the winning number. You then exclaim ‘I won!’ (Christensen 2007, 200.)
In this case, I have very high antecedent justification for believing that your ticket is not a
winner. Nevertheless, upon hearing you exclaim that you won, the rational response is not to
downgrade your epistemic credentials. Even high antecedent justification can be defeated by
new information.
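A rough Bayesian gloss on the Lucky Lotto case can make this vivid. The sincerity and error figures below are illustrative assumptions of ours, not part of Christensen’s case; the point is only that even a one-in-a-million prior moves dramatically on hearing the exclamation.

```python
def posterior_win(prior, error_rate):
    """P(ticket won | owner exclaims 'I won!'), assuming (illustratively)
    that the owner always exclaims when they won and mistakenly exclaims
    with probability error_rate when they did not."""
    true_pos = 1.0 * prior
    false_pos = error_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)

# One-in-a-million prior; suppose a careless adder falsely exclaims
# once in ten thousand checks:
p = posterior_win(1e-6, 1e-4)
print(round(p, 4))  # credence jumps from 0.000001 to roughly 0.0099
```

A lower assumed error rate (people rarely misread a winning match after checking) pushes the posterior higher still, which is why downgrading the exclaimer is not the rational response.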
Others have agreed that personal information can act as a symmetry breaker, giving the subject
some reason to privilege their own view but deny that such an advantage would be had in
suitably idealized cases of peer disagreement (Matheson 2015a). The use of personal
information to discount your interlocutor’s opinion would not violate Independence, so the
defender of the Equal Weight View needn’t disagree on this score.
5.4 The Total Evidence View
Like the Justificationist View, the Total Evidence View lies somewhere between the
Steadfast View and the Equal Weight View. The Total Evidence View claims that in cases
of peer disagreement, one is justified in believing what one’s total evidence supports (Kelly
2010). While this might sound like something of a truism, central to the view is an additional
claim about the relation between first-order evidence and higher-order evidence. Let’s first
revisit the Equal Weight View. According to the Equal Weight View, in a peer disagreement
where one individual has a 0.7 degree of belief that P and the other has a 0.3 degree of belief
that P, both peers should split the difference and adopt a 0.5 degree of belief that P. On
the Equal Weight View, then, the attitude that you are justified in adopting toward the
disputed proposition is entirely determined by the higher-order evidence. The justified
attitude is the mean between the two peer attitudes, which ignores what their shared first-
order evidence supports. According to the Total Evidence View, this is a mistake – the first-
order evidence must also factor in to what the peers are reasonable in believing. Such an
incorporation of the first-order evidence is what leads to the name “Total Evidence View”.
Kelly gives the following case to motivate the view:
Bootstrapping. At time t0, each of us has access to a substantial, fairly complicated body
of evidence. On the whole this evidence tells against hypothesis H: given our evidence, the
uniquely rational credence for us to have in H is 0.3. However, as it happens, both of us
badly mistake the import of this evidence: you adopt a 0.7 degree of belief toward H while
I adopt a 0.9 degree of belief. At time t1, we meet and compare notes and we then split the
difference and converge on a 0.8 degree of belief. (Kelly 2010, 125–126.)
While the Equal Weight View seems to be committed to the peers being justified in adopting
the 0.8 degree of belief in H, Kelly finds such a consequence implausible. After all, both
peers badly misjudged the first-order evidence! This argument can be seen as an argument
against Independence. In these cases, the disputed first-order evidence can exert an ‘upward
epistemic push’ that mitigates the impact of the higher-order evidence. Kelly takes
Independence on directly with the following case:
Holocaust Denier. I possess a great deal of evidence that the Holocaust occurred, and I judge
it to strongly support that hypothesis. Having adopted a high amount of credence that the
Holocaust occurred, I encounter an individual who denies that the Holocaust ever occurred
(because he is grossly ignorant of the evidence). (Kelly 2013b, 40)
Independence claims that my reasons for believing P cannot be used to discount my
interlocutor’s opinion about P. Absent those first-order reasons, however, Kelly doubts that
there is much left with which to discount the interlocutor, and the drastic conciliation that
would be required without a good reason to discount his opinion is implausible.
This motivation for the Total Evidence View has been responded to in several different ways.
One route of response is to deny Kelly’s assessment of the cases (Matheson 2015a). According
to this response, the individuals in Bootstrapping were both presented with powerful, though
misleading, higher-order evidence. However, misleading evidence is evidence nevertheless.
Given this, it can be argued that the individuals still correctly responded to their total body
of evidence. For instance, we can imagine a logician working on a new proof. Suppose that
it seems to him that he has successfully completed the proof, yet he nevertheless has made a
subtle error rendering the whole thing invalid. In such a case, the logician has significantly
mis-evaluated his first-order evidence, yet he has strong higher-order evidence that he is good
at things like this. Suppose he then shows his work to a capable colleague who also maintains
that the proof is successful. In this case, it may seem that it is rational for the logician to
believe that the proof is successful, and perhaps be quite confident, even though this
conclusion is significantly different from what the first-order evidence supports. According
to this rejoinder, the call to split the difference is best seen as addressing the Belief Question.
A second route of response is to emphasize the distinction between the Response Question
and the Belief Question. According to this response, while there may be something
epistemically defective about the final doxastic states of the individuals in Bootstrapping,
they nevertheless had the rational response to the higher-order evidence (Christensen 2011).
The fact that they each misjudged the original evidence is an epistemic flaw that carries over
to their final doxastic attitude, but on this line of thinking the doxastic response that each
party made upon comparing notes was nevertheless rational. According to this rejoinder, the
call to split the difference is best seen as addressing the Response Question.
5.5 Other Issues
Other objections to the Equal Weight View are not tied to any other particular view of
disagreement, and some apply to more than just the Equal Weight View. In this section we
briefly examine some of these objections.
5.5.1 Self-Defeat
A prominent objection to the Equal Weight View and other views that prescribe doxastic
conciliation is that such views are self-defeating. For expressions of this objection, see Elga
2010, Frances 2010, O’Connor 1999, Plantinga 2000a and 2000b, Taliaferro 2009,
Weatherson 2014, and Weintraub 2013. For responses, see Bogardus 2009, Christensen
2009, Elga 2010, Graves 2013, Kornblith 2013, Littlejohn 2013, Matheson 2015b, and Pittard
2015. In brief, there is disagreement about the epistemic significance of disagreement itself,
so any view that calls for conciliation upon the discovery of disagreement may end up
calling for its own rejection. For instance, a defender of the Equal Weight View could become
aware of enough individuals who are suitably epistemically well-positioned on the
epistemology of disagreement but who nevertheless deny that the Equal Weight View is correct.
Following the prescriptions of the Equal Weight View would require this defender to
abandon the view, and perhaps even accept a competitor account. For these reasons,
Plantinga (2000a) has claimed that such views are, ‘self-referentially inconsistent’ (522) and
Elga (2010) has claimed that such views are ‘incoherent’ and ‘self-undermining’ (179). Such
a worry seems to apply to the Equal Weight View, the Justificationist View, and the Total
Evidence View. Since all three views prescribe conciliation in at least some cases, they are
all (at least in principle) subject to such a result.
Defenders of these conciliatory views have responded in a number of ways. First, some
emphasize that the way in which these views are self-defeating is not a way that shows these
views to be false, or incapable of being true. ‘No true sentences have more than 5 words’
may also be said to be self-defeating, but this is a different kind of defeat. At its worst, the
consequence here for conciliatory views is that given certain contingent circumstances they
cannot be reasonably believed, but such an inability to be reasonably believed does not
demonstrate their falsity. Further, a skeptical attitude toward the epistemic significance of
disagreement seems to fit the spirit of these views quite well (more on this below).
Another way such a consequence has been downplayed is by comparing it to other principles
that share the same result. Along these lines, Christensen gives the following:
Minimal Humility. If I have thought casually about P for 10 minutes, and have decided it is
correct, and then find out that 1000 people, most of them much smarter and more familiar
with the relevant evidence and arguments than I, have thought long and hard about P, and
have independently but unanimously decided that P is false, I am not justified in
believing P. In fact, I am justified in disbelieving P. (2009, 763.)
The principle of Minimal Humility is quite plausible, yet there are contingent circumstances
under which it calls for its own rejection too. If such a consequence is untenable, then it
would call for the rejection of principles beyond those endorsed by the Equal Weight View,
the Justificationist View, and the Total Evidence View.
A final response argues that these principles about disagreement are themselves exempt from
their conciliatory prescriptions. So, correctly understood, these principles call for conciliation
in ordinary disagreements, but prescribe remaining steadfast in disagreements about
disagreements. So on this view, the true principles are not self-defeating. Several
philosophers have endorsed such a response to the self-defeat worry. Bogardus (2009) argues
that we can ‘just see’ that conciliatory principles are true and this prevents them from being
self-undermining. Elga (2010) argues that conciliatory views, properly understood, are self-
exempting since fundamental principles must be dogmatic about their own correctness.
Pittard (2015) argues that remaining resolute in conciliationism is no more non-deferential
than being conciliatory about conciliationism. The reasoning here is that to conciliate about
one’s conciliatory principles would be deferential about one’s belief or credence, but
steadfast about one’s reasoning. So, once we appreciate the distinct levels of belief/credence
and reasoning, either response to a disagreement about the significance of disagreement will
require being steadfast at one level. This, argues Pittard, makes remaining steadfast about
conciliationism unproblematic.
While such responses would avoid the self-defeat charge, some see them as guilty of
arbitrariness (see Pittard 2015, Blessenohl 2015).
5.5.2 Formal Issues
A further set of issues regarding the Equal Weight View come from considerations within
formal epistemology. Jehle and Fitelson (2009) argue that there are difficulties in making
the Equal Weight View precise along Bayesian lines. In particular, they argue that the most
intuitive understandings of the Equal Weight View have untenable consequences. Gardiner
(2014) and Wilson (2010) each object that the Equal Weight View (at least as
typically understood) violates the principle of commutativity of evidence. If we imagine an
individual encountering a number of disagreeing peers sequentially, then which doxastic
attitude is reasonable for the individual will depend upon the order in which the peers are
confronted. However, the principle of commutativity of evidence claims that the order of
evidential acquisition should not make such a difference. Lasonen-Aarnio (2013) sets up a
trilemma for the Equal Weight View arguing that either (i) it violates intuitively correct
updates, (ii) it places implausible restrictions on priors, or (iii) it is non-substantive.
5.5.3 Actual Disagreement and Possible Disagreement
Another issue concerns which disagreements are of epistemic significance. While actual peer
disagreement is rare, if not non-existent (see below), merely possible peer disagreement is
everywhere. For any belief you have, it is possible that an epistemic peer of yours disagrees.
Since we are fallible epistemic agents, possible peer disagreement is inevitable. One
challenge is to distinguish the epistemic significance of actual peer disagreement from the
significance of merely possible peer disagreement. Kelly (2005) first raises this challenge.
After all, whether this possible disagreeing peer actually exists is a contingent and fragile
matter, so to only care about it may be to exhibit an ‘actual world chauvinism’. (This term
comes from Carey 2011.)
Christensen (2007) responds to this challenge by noting that while merely possible
disagreement only shows that we are fallible, actual disagreement demonstrates that someone
has in fact made a mistake. Since we are already aware that we are fallible epistemic agents,
thinking about possible peer disagreements does not add any information that calls for
(further) doxastic change. In contrast, discovering an actual peer disagreement gives us
information that we lacked. In a case of peer disagreement, one of the parties has made a
mistake. While the possibility of error does not demand belief revision, an increase in the
probability of having made an error does.
A further question is whether actual peer disagreements are the only peer disagreements with
epistemic significance. For instance, suppose that you have created an argument that you find
sound in the solitude of your office. When thinking about what your (peer) colleague would
think, suppose that you reasonably conclude that she would disagree about the merits of your
argument. If such a conclusion is reasonable for you, then it seems that this fact should have
some epistemic consequences for you despite the fact that there is not (at least as of yet) any
actual disagreement. Arguably, such a merely possible disagreement even has the same
epistemic significance as an actual disagreement (see Carey & Matheson 2013). Similarly, if
an evil tyrant believes P and then chooses to eliminate all disagreeing peers who believe
not-P, he would not thereby become justified in his previously contentious belief (Kelly
2005). A challenge is to pick out which merely possible disagreements are epistemically
significant, since, on pain of global skepticism, clearly not all are (Barnett and Li 2016).
Issues surrounding counterfactual disagreement are also examined in Ballantyne 2013b,
Bogardus 2016, and Mogensen 2016.
5.5.4 Irrelevance of Peer Disagreement
A final issue concerns peer disagreement itself. As some have noted, epistemic peers are
extremely rare, if not non-existent (Frances 2010, 2014; King 2011; Matheson 2014). After
all, what are the odds that someone else is in precisely as good an epistemic position as
you on some matter—and even if she was, would you know it? As we have seen, there are a
number of disagreement factors, and it is quite unlikely that they end in a tie between any
two individuals at any given time. The paucity of peers may be taken to show
that the debate over the epistemic significance of peer disagreement is a futile exercise in
extreme hypotheticals. After all, if you have no epistemic peers that disagree with you,
doesn’t the epistemic threat from disagreement dissolve? Further, there may seem to have
been a deceptive shift in the debate. Much of the puzzle of disagreement is motivated by
messy real world cases of disagreement, but the vast majority of the literature is focused on
idealized cases of disagreement that rarely, if ever, occur.
There are several reasons to think about the significance of peer disagreement beyond its
intrinsic appeal. First, considering the idealized cases of peer disagreement helps to isolate
the epistemic significance of the disagreement itself. By controlling for other epistemic
factors, cases of peer disagreement help us focus on what epistemic effects discovered
disagreement has. While in non-idealized cases this is but one factor in determining what to
believe, the debate about peer disagreements attempts to help us better understand this one
factor. Second, while peers may be quite rare, as we have noted above, it is often not clear
which party is in the better epistemic position. For instance, while it is quite rare for two
individuals to be the exact same weight, it can often be unclear which individual weighs
more. These unknown cases may have the same epistemic significance as peer cases. If what
is needed is a positive reason to privilege one’s own view, as opposed to positive reasons to
think that the other is a peer, then unknown cases should be treated like peer cases.
In what follows we turn to examining the epistemic significance of disagreement outside of
these idealized cases of peer disagreement.
6. Disagreement By the Numbers
Many disagreements are one-on-one: one person disagrees with another person and as far as
they know they are the only two who have any opinion on the matter. Lisa thinks that she
and Marie should move in together; then Lisa discovers that Marie has the opposite opinion.
Bob and his sister Teri disagree about whether their father had an affair when they were
children. In this case they know that others have the answer—their father, for one—but for
various reasons the opinions of others are not accessible.
Many other disagreements involve just a few people. Bob, Rob, Hob, and Gob work in a
small hotel and are wondering whether to ask for raises in their hourly pay rate. After
discussion Bob thinks they should, Rob and Hob think they shouldn’t, and Gob is undecided.
When Bob learns all this about his three colleagues, what should his doxastic reaction be to
this mixed bag of agreement and disagreement?
However, when it comes to many of your beliefs, including some of the most interesting
ones, you are fully aware that millions of people disagree with you and millions of other
people agree with you. Just consider a belief about religion—just about any belief at all, pro
or con. You must have some views on controversial matters; virtually every human does.
Moreover, you’re perfectly aware that they are controversial. For the most part, it’s not as
though you believe B, B happens to be controversial, but you had no idea it was
controversial.
Moreover, when it comes to these controversial beliefs that large numbers of people have
taken positions on, it’s often the case that there are experts on the matter. In many cases the
experts have a definite opinion: global warming is happening and the earth is many millions
of years old. Other times they don’t: electrons and quarks come from “strings”.
If the numbers matter, then disagreement poses a skeptical threat for nearly every view of the
significance of peer disagreement. The skeptical threat for conciliatory views (the Equal
Weight View, the Justificationist View, and the Total Evidence View) is pretty
straightforward. On the Equal Weight View, since for many controversial beliefs we are not
justified in believing that the weighing of opinions favors our own opinion on the matter, the
reasons for thinking that we are mistaken outweigh our reasons for thinking we are correct.
The added resources of the Justificationist View and the Total Evidence View also do not
seem to help in resisting the skeptical conclusion. For many controversial views we lack the
strong first-order evidence and high antecedent justification that these views utilize to
mitigate the call to conciliate. Further, while appeals to personal information may be good
symmetry-breakers in cases of one-to-one disagreement, when the numbers of disagreeing
parties are much larger, the effectiveness of such appeals radically diminishes. Similar
considerations apply to most Steadfast Views. Most defenses of Steadfast Views attempt to
find a symmetry-breaker in the peer-to-peer disagreement that allows one to privilege
one’s own belief. For instance, even if self-trust or private evidence can give one a reason to
privilege their own belief, such a symmetry-breaker is seemingly not up to the task when the
belief in question is a minority view. Given that most controversial beliefs in science,
religion, politics, and philosophy are minority views, it appears that even if many Steadfast
Views of peer disagreement are correct, they still face a skeptical challenge regarding
disagreement more generally. The notable exception here is the Right Reasons View. Since
according to the Right Reasons View, what one is justified in believing is entirely determined
by the first-order evidence, no amount of discovered disagreement would change which
controversial beliefs are rational. While the Right Reasons View may be safe from such
skeptical concerns, such safety only comes by way of what many see as the feature that makes
it implausible. For instance, the Right Reasons View has it that you can be justified in
believing p even when you are aware that every one of your peers and superiors believes
not-p. While this avoids the more general skeptical threat, many see this as too high a price.
Another issue concerning how the numbers matter regards the independence of the relevant
opinions. Our beliefs are shaped by a number of factors, and not all of them are epistemically
relevant. Certain religious beliefs, political beliefs, and even philosophical beliefs are
correlated with growing up in particular regions or going to certain schools. For this reason,
it may be thought that the agreement of individuals who came to their opinions on a matter
independently counts for more, epistemically speaking, than the agreement of individuals with
a greater shared background. For more on this issue, see Carey & Matheson 2013, Goldman
2001, and Lackey 2013b.
7. Disagreement and Skepticism
So, the phenomenon of disagreement supplies a skeptical threat for many of our cherished
beliefs. If we aren’t sheltered, then we know that there is a great deal of controversy about those
beliefs even among the people who are the smartest and have worked the hardest in trying to
figure out the truth of the matter. There is good reason to think that retaining a belief in the
face of that kind of controversy is irrational, and a belief that is irrational does not amount to
knowledge. It follows that the beliefs we recognize as controversial do not amount to
knowledge. This is the threat of disagreement skepticism (Frances 2018, 2013, 2005;
Christensen 2009; Fumerton 2010; Goldberg 2009, 2013b; Kornblith 2010, 2013;
Lammenranta 2011, 2013; Machuca 2013).
For the sake of argument, we can assume that our controversial beliefs start out epistemically
rational. Roughly put, the disagreement skeptic thinks that even if a controversial belief starts
out as rational, once one appreciates the surrounding controversy, one’s belief will no longer
be rational, and thus not an item of knowledge. The disagreement skeptic focuses on beliefs
that satisfy the following recognition-of-controversy conditions.
You know that the belief B in question has been investigated and debated (i) for a very long
time by (ii) a great many (iii) very smart people who (iv) are your epistemic peers and
superiors on the matter and (v) have worked very hard (vi) under optimal circumstances to
figure out if B is true. But you also know that (vii) these experts have not come to any
significant agreement on B and (viii) those who agree with you are not, as a group, in an
appreciably better position to judge B than those who disagree with you.
Notice that the problem does not emerge from a mere lack of consensus. Very few, if any,
beliefs are disagreement-free. Rather, the skeptical threat comes from both the extent of the
disagreement (conditions (i) and (ii)) and the nature of the disagreeing parties (conditions
(iii) – (viii)). While not every belief meets these recognition-of-controversy conditions, many
do, and among those that do are some of our most cherished beliefs.
For instance, I might have some opinion regarding the nature of free will or the moral
permissibility of capital punishment or whether God exists. I know full well that these matters
have been debated by an enormous number of really smart people for a very long time—in
some cases, for centuries. I also know that I’m no expert on any of these topics. I also know
that there are genuine experts on those topics—at least, they have thought about those
topics much longer than I have, with a great deal more awareness of relevant considerations,
etc. It’s no contest: I know I’m just an amateur compared to them. Part of being reflective is
coming to know about your comparative epistemic status on controversial subjects. That said,
being an expert in the relevant field doesn’t remove the problem either. Even if I am an expert
on free will, I am aware that there are many other such experts, that I am but one such voice
among many, and that disagreement is rampant amongst us.
The person who knows (i)–(viii) is robbed of the reasonableness of several comforting
responses to the discovery of controversy. If she is reasonable, then she realizes that she can’t
make, at least with confidence, anything like the following remarks:
• Well, the people who agree with me are smarter than the people who disagree with
me.
• We have crucial evidence they don’t have.
• We have studied the key issue a great deal more than they have.
• They are a lot more biased than we are.
This phenomenon is particularly prevalent with regard to religion, politics, morality, and
philosophy. If, when it comes to debates about free will, capital punishment, affirmative
action, and many other standard controversial topics, you say to yourself, regarding the experts
who disagree with you, ‘Those people just don’t understand the issues’, ‘They aren’t very
smart’, ‘They haven’t thought about it much’, et cetera, then you are being irrational, in the
sense that you should know better than to say such things, at least if you’re honest with
yourself and informed of the state of the relevant debate.
However, the connection between controversy and skepticism won’t apply to many of our other
beliefs. No one (or no one you know) is going around saying your parents don’t love you,
you aren’t a basically moral person, etc. So those beliefs are probably immune to any
skeptical argument of the form ‘There is long-standing disagreement among experts
regarding your belief B; you know all about it (viz. conditions (i)–(viii)); you have no good
reason to discount the ones who disagree with you; so, you shouldn’t retain your belief B’.
This is not to say that those beliefs escape all skeptical arguments based on human error and
related phenomena. But, the first thing to note about disagreement skepticism is that it
is contained. Only beliefs that meet something like the recognition-of-controversy conditions
are subject to this skeptical threat. Interestingly, however, it is not itself exempt from these
skeptical consequences. Such views of disagreement are themselves quite controversial, so
here too is another place where the self-defeat worry arises.
Disagreement skepticism is also contingent. The nature and extent of disagreements are both
contingent matters, so since disagreement skepticism relies on these factors, the skeptical
consequences of disagreement are also contingent. At one point in time the shape of the Earth
was quite contentious. While there is not now universal agreement that the Earth is roughly
spherical, the recognition-of-controversy conditions are no longer met on this matter.
Similarly, issues of great current controversy may too at some point fail to meet the
recognition-of-controversy conditions. So, the skeptical threat from disagreement can come
and go. That said, the track-record for the staying power of various philosophical
disagreements strongly indicates that they aren’t going anywhere anytime soon.
Finally, disagreement skepticism is exclusively epistemic. At issue here has solely been one’s
epistemic reasons for holding a belief. Meeting the recognition-of-controversy conditions
raises a problem for these reasons, but we haven’t said anything about what moral, prudential,
or even religious reasons you may have for holding a controversial belief. The skeptical threat
from disagreement only concerns our epistemic reasons. Relatedly, if there is an all-things-
considered norm of belief, disagreement skepticism may have some implications for this
norm, but only by way of addressing the epistemic reasons that one has for belief.
A related point is that these consequences are doxastic consequences. Disagreement
skepticism is about what beliefs are/are not rational and which changes in confidence are/are
not rational. Disagreement skepticism is not a view about which views should be defended
or what theses should be further researched. When coupled with the knowledge norm of
assertion or the knowledge norm of action, disagreement skepticism would have further
consequences about what claims can be asserted or acted upon, but these consequences only
follow from such a combination of views.
Bibliography
• Adams, Zed, 2013, “The Fragility of Moral Disagreement,” in Diego Machuca
(ed.), Disagreement and Skepticism, New York: Routledge, pp. 131–49.
• Anderson, Elizabeth, 2006, “The Epistemology of Democracy,” Episteme, 3: 8–22.
• Arsenault, Michael and Zachary C. Irving, 2012, “Aha! Trick Questions, Independence, and
the Epistemology of Disagreement,” Thought: A Journal of Philosophy, 1 (3): 185–194.
• Aumann, Robert J., 1976, “Agreeing to Disagree,” The Annals of Statistics, 4: 1236–1239.
• Ballantyne, Nathan, 2013a, “The Problem of Historical Variability,” in Diego Machuca
(ed.), Disagreement and Skepticism, New York: Routledge.
• –––, 2013b, “Counterfactual Philosophers,” Philosophy and
Phenomenological Research, 87 (2): 368–387.
• Ballantyne, Nathan, and E. J. Coffman, 2011, “Uniqueness, Evidence, and
Rationality,” Philosophers Imprint, 11 (18): 1–13.
• –––, 2012, “Conciliationism and Uniqueness,” Australasian Journal of Philosophy, 90 (4):
657–670.
• Barnett, Zach and Han Li, 2016, “Conciliationism and Merely Possible
Disagreement,” Synthese, 193 (9): 2973–2985.
• Benjamin, Sherman, 2015, “Questionable Peers and Spinelessness,” Canadian Journal of
Philosophy, 45 (4): 425–444.
• Bergmann, Michael, 2009, “Rational Disagreement after Full Disclosure,” Episteme: A
Journal of Social Epistemology, 6 (3): 336–353.
• Besong, Brian, 2014, “Moral Intuitionism and Disagreement,” Synthese, 191 (12): 2767–
2789.
• Blessenohl, Simon, 2015, “Self-Exempting Conciliationism is Arbitrary,” Kriterion: Journal
of Philosophy, 29 (3): 1–22.
• Bogardus, Tomas, 2009, “A Vindication of the Equal Weight View,” Episteme: A Journal of
Social Epistemology, 6 (3): 324–335.
• Bogardus, Tomas, 2013a, “Foley’s Self-Trust and Religious Disagreement,” Logos and
Episteme, 4 (2): 217–226.
• Bogardus, Tomas, 2013b, “Disagreeing with the (Religious) Skeptic,” International Journal
for Philosophy of Religion, 74 (1): 5–17.
• Bogardus, Tomas, 2016, “Only All Naturalists Should Worry About Only One Evolutionary
Debunking Argument,” Ethics, 126 (3): 636–661.
• Boyce, Kenneth and Allan Hazlett, 2014, “Multi‐Peer Disagreement and the Preface
Paradox,” Ratio, 27 (3): 29–41.
• Bueno, Otávio, 2013, “Disagreeing with the Pyrrhonist?” in Diego Machuca
(ed.), Disagreement and Skepticism, New York: Routledge, 131–49.
• Carey, Brandon, 2011, “Possible Disagreements and Defeat,” Philosophical Studies, 155 (3):
371–381.
• Carey, Brandon and Jonathan Matheson, 2013, “How Skeptical is the Equal Weight View?”
in Diego Machuca (ed.), Disagreement and Skepticism, New York: Routledge, pp. 131–49.
• Carter, J. Adam, 2013, “Disagreement, Relativism and Doxastic Revision,” Erkenntnis, 1
(S1):1–18.
• –––, 2014, “Group Peer Disagreement,” Ratio, 27 (3): 11–28.
• Christensen, David, 2007, “Epistemology of Disagreement: The Good News,” Philosophical
Review, 116: 187–218.
• –––, 2009, “Disagreement as Evidence: The Epistemology of Controversy,” Philosophy
Compass, 4(5): 756–767.
• –––, 2010a, “Higher-Order Evidence,” Philosophy and Phenomenological Research, 81 (1):
185–215.
• –––, 2010b, “Rational Reflection,” Philosophical Perspectives, 24 (1): 121–140.
• –––, 2011, “Disagreement, Question-Begging and Epistemic Self-Criticism,” Philosophers
Imprint, 11(6): 1–22.
• –––, 2016a, “Disagreement, Drugs, Etc.: From Accuracy to Akrasia,” Episteme, 13 (4): 397–
422.
• –––, 2016b, “Uniqueness and Rational Toxicity,” Noûs, 50 (3): 584–603.
• Christensen, David and Jennifer Lackey (eds.), 2013, The Epistemology of Disagreement:
New Essays, New York: Oxford University Press.
• Comesaña, Juan, 2012, “Conciliation and Peer-Demotion in the Epistemology of
Disagreement,” American Philosophical Quarterly, 49 (3): 237–252.
• Conee, Earl, 1987, “Evident, but Rationally Unacceptable,” Australasian Journal of
Philosophy, 65: 316–326.
• –––, 2009, “Peerage,” Episteme, 6(3): 313–323.
• –––, 2010, “Rational Disagreement Defended,” in Richard Feldman and Ted Warfield
(eds.), Disagreement, New York: Oxford University Press.
• De Cruz, Helen, forthcoming, “Religious Disagreement: An Empirical Study Among
Academic Philosophers,” Episteme.
• De Cruz, Helen and Johan De Smedt, 2013, “The Value of Epistemic Disagreement in
Scientific Practice. The Case of Homo Floresiensis,” Studies in History and Philosophy of
Science Part A, 44 (2): 169–177.
• DePaul, Michael, 2013, “Agent Centeredness, Agent Neutrality, Disagreement, and Truth Conduciveness,” in Chris Tucker (ed.), Seemings and Justification, New York: Oxford University Press.
• Decker, Jason, 2012, “Disagreement, Evidence, and Agnosticism,” Synthese, 187 (2): 753–
783.
• Dellsén, Finnur, forthcoming, “When Expert Disagreement Supports the
Consensus,” Australasian Journal of Philosophy.
• Dogramaci, Sinan and Sophie Horowitz, 2016, “An Argument for Uniqueness About
Evidential Support,” Philosophical Issues, 26 (1): 130–147.
• Dougherty, Trent, 2013, “Dealing with Disagreement from the First-Person Perspective: A
Probabilistic Proposal,” in D. Machuca (ed.), Disagreement and Skepticism, New York:
Routledge, pp. 218–238.
• Elga, Adam, 2007, “Reflection and Disagreement,” Noûs, 41: 478–502.
• –––, 2010, “How to Disagree About How to Disagree,” in Richard Feldman and Ted Warfield
(eds.), Disagreement, New York: Oxford University Press.
• Elgin, Catherine, 2010, “Persistent Disagreement,” in Richard Feldman and Ted Warfield
(eds.), Disagreement, New York: Oxford University Press.
• Enoch, David, 2010, “Not Just a Truthometer: Taking Oneself Seriously (but not Too
Seriously) in Cases of Peer Disagreement,” Mind, 119: 953–997.
• Everett, Theodore J., 2015, “Peer Disagreement and Two Principles of Rational
Belief,” Australasian Journal of Philosophy, 93 (2): 273–286.
• Feldman, Richard, 2003, “Plantinga on Exclusivism,” Faith and Philosophy, 20: 85–90.
• –––, 2004, “Having Evidence,” in Conee and Feldman (eds.), Evidentialism: Essays in
Epistemology, New York: Oxford University Press, pp. 219–242.
• –––, 2005, “Respecting the Evidence,” Philosophical Perspectives, 19: 95–119.
• –––, 2006a, “Epistemological Puzzles about Disagreement,” in Steve Hetherington (ed.), Epistemic Futures, New York: Oxford University Press, pp. 216–236.
• –––, 2006b, “Reasonable Religious Disagreements,” in L. Antony (ed.), Philosophers without Gods: Meditations on Atheism and the Secular Life, New York: Oxford University Press.
• –––, 2006c, “Clifford’s Principle and James’ Options,” Social Epistemology, 20: 19–33.
• –––, 2009, “Evidentialism, Higher-Order Evidence, and Disagreement,” Episteme, 6 (3): 294–312.
• Feldman, Richard, and Ted Warfield (eds.), 2010, Disagreement, Oxford: Oxford University
Press.
• Frances, Bryan, 2005, “When a Skeptical Hypothesis is Live,” Noûs, 39: 559–95.
• –––, 2010a, “Disagreement,” in Duncan Pritchard and Sven Bernecker (eds.), Routledge
Companion to Epistemology, New York: Routledge Press, pp. 68–74.
• –––, 2010b, “The Reflective Epistemic Renegade,” Philosophy and Phenomenological
Research, 81: 419–463.
• –––, 2013, “Philosophical Renegades,” in David Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement: New Essays, Oxford: Oxford University Press, pp. 121–166.
• –––, 2014, Disagreement, Cambridge, UK: Polity Press.
• –––, 2018, “Scepticism and Disagreement,” in Diego Machuca and Baron Reed
(eds.), Skepticism: From Antiquity to the Present, New York: Bloomsbury.
• Fumerton, Richard, 2010, “You Can’t Trust a Philosopher,” in Richard Feldman and Ted
Warfield (eds.), Disagreement, New York: Oxford University Press.
• Gardiner, Georgi, 2014, “The Commutativity of Evidence: A Problem for Conciliatory
Views of Disagreement,” Episteme, 11 (1): 83–95.
• Goldberg, Sanford, 2009, “Reliabilism in Philosophy,” Philosophical Studies, 124: 105–17.
• –––, 2013a, “Inclusiveness in the Face of Anticipated Disagreement,” Synthese, 190 (7):
1189–1207.
• –––, 2013b, “Defending Philosophy in the Face of Systematic Disagreement,” in Diego
Machuca (ed.), Disagreement and Skepticism, New York: Routledge, pp. 131–49.
• Goldman, Alvin, 2001, “Experts: Which Ones Should You Trust?” Philosophy and
Phenomenological Research, 63: 85–110.
• –––, 2010, “Epistemic Relativism and Reasonable Disagreement,” in Richard Feldman and Ted Warfield (eds.), Disagreement, New York: Oxford University Press.
• Graves, Shawn, 2013, “The Self-Undermining Objection in the Epistemology of
Disagreement,” Faith and Philosophy, 30 (1): 93–106.
• Greco, Daniel and Brian Hedden, 2016, “Uniqueness and Metaepistemology,” Journal of
Philosophy, 113 (8): 365–395.
• Gutting, Gary, 1982, Religious Belief and Religious Skepticism, Notre Dame: University of
Notre Dame Press.
• Hardwig, John, 1985, “Epistemic Dependence,” Journal of Philosophy, 82: 335–49.
• –––, 1991, “The Role of Trust in Knowledge,” Journal of Philosophy, 88: 693–708.
• Hawthorne, John and Amia Srinivasan, 2013, “Disagreement Without Transparency: Some Bleak Thoughts,” in David Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement: New Essays, New York: Oxford University Press, pp. 9–30.
• Hazlett, Allan, 2012, “Higher-Order Epistemic Attitudes and Intellectual Humility,” Episteme, 9: 205–23.
• –––, 2014, “Entitlement and Mutually Recognized Reasonable Disagreement,” Episteme, 11
(1): 1–25.
• Henderson, David, Terence Horgan, Matjaž Potrč, and Hannah Tierney, 2017, “Nonconciliation in Peer Disagreement: Its Phenomenology and Its Rationality,” Grazer Philosophische Studien, 94: 194–225.
• Heesen, Remco and Pieter van der Kolk, 2016, “A Game-Theoretic Approach to Peer
Disagreement,” Erkenntnis, 81 (6): 1345–1368.
• Jehle, David and Brandon Fitelson, 2009, “What is the ‘Equal Weight View’?” Episteme, 6: 280–293.
• Jones, Nicholas, 2012, “An Arrovian Impossibility Theorem for the Epistemology of Disagreement,” Logos and Episteme, 3 (1): 97–115.
• Kelly, Thomas, 2005, “The Epistemic Significance of Disagreement,” in T. Gendler and J. Hawthorne (eds.), Oxford Studies in Epistemology, vol. 1, Oxford: Oxford University Press.
• –––, 2010, “Peer Disagreement and Higher Order Evidence,” in R. Feldman and T. Warfield (eds.), Disagreement, New York: Oxford University Press.
• –––, 2013a, “Evidence Can be Permissive,” in M. Steup, J. Turri, and E. Sosa
(eds.), Contemporary Debates in Epistemology, New York: Blackwell.
• –––, 2013b, “Disagreement and the Burdens of Judgment,” in David Christensen and Jennifer
Lackey (eds.), The Epistemology of Disagreement: New Essays, Oxford: Oxford University
Press.
• King, Nathan, 2011, “Disagreement: What’s the Problem? Or A Good Peer is Hard to
Find,” Philosophy and Phenomenological Research, 85 (2): 249–272.
• –––, 2013, “Disagreement: The Skeptical Arguments from Peerhood and Symmetry,” in Diego Machuca (ed.), Disagreement and Skepticism, New York: Routledge, pp. 193–217.
• Kopec, Matthew, 2015, “A Counterexample to the Uniqueness Thesis,” Philosophia, 43 (2):
403–409.
• Kopec, Matthew and Michael G. Titelbaum, 2016, “The Uniqueness Thesis,” Philosophy Compass, 11 (4): 189–200.
• Kornblith, Hilary, 2010, “Belief in the Face of Controversy,” in Richard Feldman and Ted
Warfield (eds.), Disagreement, New York: Oxford University Press.
• –––, 2013, “Is Philosophical Knowledge Possible?” in Diego Machuca (ed.) Disagreement
and Skepticism, New York: Routledge, pp. 131–49.
• Lackey, Jennifer, 2010a, “What Should We Do When We Disagree?” in Tamar Szabo
Gendler and John Hawthorne (eds.), Oxford Studies in Epistemology, Oxford: Oxford
University Press.
• –––, 2010b, “A Justificationalist View of Disagreement’s Epistemic Significance,” in Adrian
Haddock, Alan Millar, and Duncan Pritchard (eds.), Social Epistemology, Oxford: Oxford
University Press.
• –––, 2013a, “What’s the Rational Response to Everyday Disagreements?” Philosophers’ Magazine, 59: 101–6.
• –––, 2013b, “Disagreement and Belief Dependence: Why Numbers Matter,” in David
Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement: New
Essays, Oxford: Oxford University Press, pp. 243–68.
• –––, 2014, “Taking Religious Disagreement Seriously,” in Laura Frances Callahan and
Timothy O’Connor (eds.), Religious Faith and Intellectual Virtue, Oxford: Oxford
University Press, pp. 299–316.
• Lam, Barry, 2011, “On the Rationality of Belief-Invariance in Light of Peer
Disagreement,” Philosophical Review, 120 (2): 207–245.
• –––, 2013, “Calibrated Probabilities and the Epistemology of Disagreement,” Synthese, 190
(6): 1079–1098.
• Lammenranta, Markus, 2011, “Skepticism and Disagreement,” in Diego Machuca (ed.), Pyrrhonism in Ancient, Modern, and Contemporary Philosophy, Dordrecht: Springer, pp. 203–216.
• –––, 2013, “The Role of Disagreement in Pyrrhonian and Cartesian Skepticism,” in Diego Machuca (ed.), Disagreement and Skepticism, New York: Routledge, pp. 46–65.
• Lampert, Fabio and John Biro, 2017, “What is Evidence of Evidence Evidence of?” Logos
and Episteme, 2: 195–206.
• Lane, Melissa, 2014, “When the Experts are Uncertain: Scientific Knowledge and the Ethics
of Democratic Judgment,” Episteme, 11 (1): 97–118.
• Lasonen-Aarnio, Maria, 2013, “Disagreement and Evidential Attenuation,” Noûs, 47 (4):
767–794.
• –––, forthcoming, “Higher-Order Evidence and the Limits of Defeat,” Philosophy and
Phenomenological Research.
• Lee, Matthew Brandon, 2013, “Conciliationism Without Uniqueness,” Grazer Philosophische Studien, 88: 161–188.
• Levinstein, Benjamin Anders, 2015, “With All Due Respect: The Macro-Epistemology of Disagreement,” Philosophers' Imprint, 15 (13): 1–20.
• –––, 2017, “Permissive Rationality and Sensitivity,” Philosophy and Phenomenological Research, 94 (2): 1–29.
• Licon, Jimmy Alfonso, 2013, “On Merely Modal Epistemic Peers: Challenging the Equal-
Weight View,” Philosophia, 41 (3): 809–823.
• List, Christian and Robert Goodin, 2001, “Epistemic Democracy: Generalizing the Condorcet Jury Theorem,” Journal of Political Philosophy, 9: 277–306.
• Littlejohn, Clayton, 2013, “Disagreement and Defeat,” in Diego Machuca
(ed.), Disagreement and Skepticism, New York: Routledge, pp. 169–192.
• MacFarlane, John, 2007, “Relativism and Disagreement,” Philosophical Studies, 132: 17–
31.
• Machuca, Diego, 2015, “Agrippan Pyrrhonism and the Challenge of Disagreement,” Journal
of Philosophical Research, 40: 23–39.
• –––, 2017, “A Neo-Pyrrhonian Response to the Disagreeing about Disagreement
Argument,” Synthese, 194 (5): 1663–1680.
• Machuca, Diego (ed.), 2013, Disagreement and Skepticism, New York: Routledge.
• Martini, Carlo, 2013, “A Puzzle About Belief Updating,” Synthese, 190 (15): 3149–3160.
• Matheson, Jonathan, 2009, “Conciliatory Views of Disagreement and Higher-Order
Evidence,” Episteme: A Journal of Social Philosophy, 6 (3): 269–279.
• –––, 2011, “The Case for Rational Uniqueness,” Logos and Episteme, 2 (3): 359–73.
• –––, 2014, “Disagreement: Idealized and Everyday,” in Jonathan Matheson and Rico Vitz (eds.), The Ethics of Belief: Individual and Social, New York: Oxford University Press, pp. 315–330.
• –––, 2015a, “Disagreement and the Ethics of Belief,” in James Collier (ed.), The Future of Social Epistemology: A Collective Vision, Lanham, Maryland: Rowman and Littlefield, pp. 139–148.
• –––, 2015b, “Are Conciliatory Views of Disagreement Self-Defeating?” Social Epistemology, 29 (2): 145–159.
• –––, 2015c, The Epistemic Significance of Disagreement, London: Palgrave Macmillan.
• –––, 2016, “Moral Caution and the Epistemology of Disagreement,” Journal of Social Philosophy, 47 (2): 120–141.
• Moffett, Mark, 2007, “Reasonable Disagreement and Rational Group Inquiry,” Episteme: A
Journal of Social Epistemology, 4 (3): 352–367.
• Mogensen, A. L., 2016, “Contingency Anxiety and the Epistemology of
Disagreement,” Pacific Philosophical Quarterly, 97 (4): 590–611.
• Oppy, Graham, 2010, “Disagreement,” International Journal for Philosophy of Religion, 68
(1): 183–199.
• Palmira, Michele, 2013, “A Puzzle About the Agnostic Response to Peer
Disagreement,” Philosophia, 41 (4): 1253–1261.
• Pasnau, Robert, 2015, “Disagreement and the Value of Self-Trust,” Philosophical Studies,
172 (9): 2315–2339.
• Peels, Rik and Anthony Booth, 2014, “Why Responsible Belief Is Permissible
Belief,” Analytic Philosophy, 55: 75–88.
• Pettit, Philip, 2006, “When to Defer to the Majority – and When Not,” Analysis, 66: 179–187.
• Pittard, John, 2014, “Conciliationism and Religious Disagreement,” in Michael Bergmann
and Patrick Kain (eds.), Challenges to Moral and Religious Belief: Disagreement
and Evolution, Oxford University Press, pp. 80–97.
• –––, 2015, “Resolute Conciliationism,” Philosophical Quarterly, 65 (260): 442–463.
• –––, forthcoming, “Disagreement, Reliability, and Resilience,” Synthese.
• Plantinga, Alvin, 2000a, Warranted Christian Belief, Oxford: Oxford University Press.
• –––, 2000b, “Pluralism: A Defense of Religious Exclusivism,” in Philip L. Quinn and Kevin
Meeker (eds.), The Philosophical Challenge of Religious Diversity, New York: Oxford
University Press, pp. 172–192.
• Priest, Maura, 2016, “Inferior Disagreement,” Acta Analytica, 31 (3): 263–283.
• Pritchard, Duncan, 2013, “Disagreement, Skepticism, and Track-Record Arguments,”
in Disagreement and Skepticism, Diego Machuca (ed.), New York: Routledge, pp. 150–168.
• Raleigh, Thomas, 2017, “Another Argument Against Uniqueness,” Philosophical Quarterly,
67 (267): 327–346.
• Rasmussen, Mattias Skipper, Asbjørn Steglich-Petersen, and Jens Christian Bjerring, forthcoming, “A Higher-Order Approach to Disagreement,” Episteme, first online 03 April 2017, doi: 10.1017/epi.2016.43
• Rattan, Gurpreet, 2014, “Disagreement and the First‐Person Perspective,” Analytic
Philosophy, 55 (1): 31–53.
• Raz, Joseph, 1998, “Disagreement in Politics,” American Journal of Jurisprudence, 43: 25–
52.
• Reisner, Andrew, 2016, “Peer Disagreement, Rational Requirements, and Evidence of
Evidence as Evidence Against,” in Pedro Schmechtig and Martin Grajner (eds.), Epistemic
Reasons, Norms and Goals, De Gruyter, pp. 95–114.
• Roche, William, 2014, “Evidence of Evidence is Evidence Under Screening-Off,” Episteme,
11 (1): 119–124.
• Rosa, Luis, 2012, “Justification and the Uniqueness Thesis,” Logos and Episteme, 4: 571–
577.
• Rosen, Gideon, 2001, “Nominalism, Naturalism, and Epistemic Relativism,” Philosophical Perspectives, 15: 69–91.
• –––, 2007, “The Case Against Epistemic Relativism: Reflections on Chapter 6 of Fear of Knowledge,” Episteme, 4 (1): 11–29.
• Rotondo, Andrew, 2013, “Undermining, Circularity, and Disagreement,” Synthese, 190 (3):
563–584.
• Schafer, Karl, 2015, “How Common is Peer Disagreement? On Self‐Trust and Rational
Symmetry,” Philosophy and Phenomenological Research, 91 (1): 25–46.
• Schoenfield, Miriam, 2015, “A Dilemma for Calibrationism,” Philosophy and
Phenomenological Research, 91: 425–455.
• –––, forthcoming, “Permission to Believe,” Noûs.
• Simpson, Robert Mark, 2013, “Epistemic Peerhood and the Epistemology of
Disagreement,” Philosophical Studies, 164 (2): 561–577.
• Sosa, Ernest, 2010, “The Epistemology of Disagreement,” in Disagreement, Richard
Feldman and Ted Warfield (eds.), New York: Oxford University Press.
• Tersman, Folke, 2013, “Moral Disagreement: Actual vs. Possible,” in Diego Machuca (ed.), Disagreement and Skepticism, New York: Routledge, pp. 90–108.
• Thune, Michael, 2010a, “Religious Belief and the Epistemology of
Disagreement,” Philosophy Compass, 5 (8): 712–724.
• –––, 2010b, “‘Partial Defeaters’ and the Epistemology of Disagreement,” Philosophical
Quarterly, 60 (239): 355–372.
• Thurow, Joshua, 2012, “Does Religious Disagreement Actually Aid the Case for Theism?”
in Jake Chandler and Victoria Harrison (eds.), Probability in the Philosophy of Religion,
Oxford: Oxford University Press.
• Titelbaum, Michael, 2015, “Rationality’s Fixed Point (Or: In Defense of Right Reason),” Oxford Studies in Epistemology, vol. 5, Oxford: Oxford University Press, pp. 253–294.
• van Inwagen, Peter, 1996, “It is Wrong, Always, Everywhere, and for Anyone, to Believe Anything, Upon Insufficient Evidence,” in J. Jordan and D. Howard-Snyder (eds.), Faith, Freedom, and Rationality, Lanham, MD: Rowman and Littlefield, pp. 137–154.
• Vavova, Katia, 2014a, “Moral Disagreement and Moral Skepticism,” Philosophical
Perspectives, 28 (1): 302–333.
• –––, 2014b, “Confidence, Evidence, and Disagreement,” Erkenntnis, 79 (1): 173–183.
• Weatherson, Brian, 2013, “Disagreements, Philosophical and Otherwise,” in David Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement: New Essays, Oxford: Oxford University Press.
• Wedgwood, Ralph, 2010, “The Moral Evil Demons,” in R. Feldman and T. Warfield
(eds.), Disagreement, Oxford: Oxford University Press.
• Weber, Marc Andree, 2017, “Armchair Disagreement,” Metaphilosophy, 48 (4): 527–549.
• White, Roger, 2005, “Epistemic Permissiveness,” in J. Hawthorne (ed.), Philosophical Perspectives: Epistemology, vol. 19, Malden, MA: Blackwell Publishing, pp. 445–459.
• –––, 2007, “Epistemic Subjectivism,” Episteme: A Journal of Social Epistemology, 4 (1):
115–129.
• –––, 2009, “On Treating Oneself and Others as Thermometers,” Episteme, 6 (3): 233–250.
• –––, 2013, “Evidence Cannot be Permissive,” in M. Steup, J. Turri, and E. Sosa
(eds.), Contemporary Debates in Epistemology, New York: Blackwell.
• Wietmarschen, Han van, 2013, “Peer Disagreement, Evidence, and Well-
Groundedness,” Philosophical Review, 122 (3): 395–425.
• Wilson, Alastair, 2010, “Disagreement, Equal Weight and Commutativity,” Philosophical Studies, 149 (3): 321–326.
• Worsnip, Alex, 2014, “Disagreement About Disagreement? What Disagreement About
Disagreement?” Philosophers Imprint, 14 (18): 1–20.
• Zagzebski, Linda, 2012, Epistemic Authority: A Theory of Trust, Authority, and Autonomy in
Belief, New York: Oxford University Press.