Reason Better
An interdisciplinary guide to critical thinking

Chapter 2. Mindset

Introduction
What are the attributes of a person who reasons well? Perhaps being clever, smart, wise or rational.
Notice, though, that these are importantly different things. For example, you can probably think of
people who are very clever but not very curious—effective at persuading people of their own views but
not driven by the desire to know the truth. Being a good reasoner requires more than the kind of
intelligence that helps us impress others or win arguments.


This idea is strongly supported by cognitive psychology. In fact, the kind of intelligence measured by IQ
tests doesn't help much with some of the worst cognitive pitfalls, like confirmation bias. Even very clever
people can simply use their thinking skills as a kind of lawyer to justify their pre-existing views to
themselves. Reasoning well requires a mindset that is eager to get things right, even if that means
changing our mind [1].

Many highly intelligent people are poor thinkers. Many people of average
intelligence are skilled thinkers. The power of a car is separate from the way
the car is driven.
—Edward de Bono

The cognitive psychologist Jonathan Baron has divided the process of good reasoning into what he
takes to be its three most important aspects: (i) confidence in proportion to the amount and quality of
thinking done; (ii) search that is thorough in proportion to the question's importance; and (iii) fairness to
other possibilities than those that we initially favor [2].

My goal in this chapter is to characterize the mindset that best promotes all three of these attributes. In
slogan form, that mindset is curious, thorough, and open:

1. Curious. The goal is for our beliefs to reflect how things really are; this is best achieved when our
confidence in a belief matches the strength of the evidence we have for it.
2. Thorough. It takes patience and effort to push past what initially seems true and thoroughly search
for alternative possibilities and any available evidence.
3. Open. This means evaluating evidence impartially, considering weaknesses in our initial view, and
asking what we'd expect to see if alternative views were true.

We'll pay special attention to how these attributes can help us overcome confirmation bias (motivated
or not), which is arguably the most ubiquitous and harmful of the cognitive pitfalls.

Learning Objectives
By the end of this chapter, you should understand:

the importance of aiming for discovery rather than defense


what is meant by "accuracy," as it applies to binary beliefs and degrees of confidence
how confirmation bias operates at the search and evaluation stages of reasoning
the use of decoupling to overcome biased evaluation
how the bias blindspot and introspection illusion operate, leading to the biased opponent effect
two ways of "considering the opposite" to help overcome biased evaluation

2.1 Curious
Curiosity, in our sense, is not just a matter of being interested in a topic, but of wanting to discover the
truth about it. When Aristotle wrote that "all humans by nature desire to know," he was surely right—up
to a point [3]. Much of the time, we are genuinely curious, but our minds are complex and many-layered
things. Our beliefs, as we saw in Chapter 1, are also influenced by other motives that may conflict with
our curiosity.

Defense or discovery?
Suppose I'm in a heated disagreement with Alice. She's challenging a belief in which I am emotionally
invested, so I try to find every fact that supports my case and every flaw in her points. If I think of
anything that might detract from my case, I ignore or dismiss it. When she makes a good point, I get
upset; when I make a good point, I feel victorious. Obviously, this would not be the ideal setting to avoid
cognitive pitfalls, because my reasoning is clearly motivated. If I'm very skilled at reasoning, this might
help me successfully defend my position, but it certainly won't help me reason well, because that's not
even my goal.

Think of the militaristic language we use for talking about debates: we
defend our positions and attack or shoot down or undermine our
opponent's statements. If I have a defensive mindset, any evidence
that can be used against my opponents is like a weapon to defeat
them. And any point that might challenge my side is a threat, so I
must ignore it, deflect it, or defuse it. Since my goal is to defend
myself, it doesn't matter if I make mistakes in reasoning, as long as my opponent doesn't notice them!
Facts and arguments are really just tools or weapons for achieving that goal.

Now contrast the goal of defense with the goal of discovery. If I am an explorer or a scout, my goal is not
to defend or attack, but simply to get things right. My job is not to return with the most optimistic report
possible; it's to accurately report how things are, even if it's bad news. So even if I secretly hope to
discover that things are one way rather than another, I can't let that feeling muddle my thinking. I have a
clear goal—to find out how things really are [4].
Of course, it's one thing to decide abstractly that our goal is accuracy; it's another to really feel curious,
especially when we are motivated to have a particular belief. The defensiveness we feel is not directly
under our control, since it comes from System 1. The elephant requires training if it is actually going to
adopt a different mindset. This involves learning how to feel differently. As Julia Galef puts it: "If we really
want to improve our judgment... we need to learn how to feel proud instead of ashamed when we notice
we might have been wrong about something. We need to learn how to feel intrigued instead of defensive
when we encounter some information that contradicts our beliefs."

Genuine curiosity matters more than any specific reasoning skills we can acquire from academic
training. When we are really curious, we allow ourselves to follow the evidence wherever it leads without
worrying whether we were right to begin with. We should even welcome evidence that conflicts with our
beliefs, because ultimately we want to change our beliefs if they're wrong.

A man should never be ashamed to own he has been in the wrong, which is but
saying, in other words, that he is wiser today than he was yesterday.
—Alexander Pope

Finally, curiosity transforms how we interact with people who disagree with us. It's actually intriguing
when people have different views, because it might mean that they have information that we lack. And
we no longer try to find the easiest version of their view to attack—we wouldn't learn anything from that!
Instead, we want to find the most knowledgeable and reliable people who disagree, since they might
know things we don't. If they also share our goal of discovery, then discussing the issue becomes
cooperative rather than adversarial. We fill in the gaps in each other's knowledge and feel curious about
the source of any remaining disagreements.

A single slogan sums it all up: don't use evidence to prove you're right, use it to become right.

Accurate beliefs
At this point, it's worth taking a step back and reflecting on what exactly the goal of curiosity is. What
does it mean to have accurate beliefs? The simple answer is that the more accurate our beliefs, the
more closely they reflect how things actually are. So the goal is to believe things only if they match
reality—for example, to believe that the cat is on the mat only if the cat is actually on the mat—and so on
for all of our other beliefs.
A helpful analogy here is to consider the relationship between a map and the territory that it represents.
An accurate map is one that represents a road as being in a certain place only when there actually is a
road there. And the more accurate the map, the more closely its marks match the actual positions of
things in the territory. So if our goal is to draw an accurate map, we can't just draw a road on it because
we want a road there. Likewise, when we're genuinely curious, we aren't secretly hoping to arrive at a
particular belief. Rather, we want our beliefs to reflect the world the way a good map reflects its territory
[5].

One complication for our simple account of accuracy is that it treats beliefs as though they were entirely
on or off—as though the only two options are to believe that the cat is on the mat, or to believe that the
cat is not. That simple picture of binary belief fits the map analogy well, since maps either have a mark
representing some feature, such as a road through the hills, or do not. Most maps have no way of
indicating that there's probably or possibly a road through the hills.

But our beliefs about the world come in degrees of confidence. We might be pretty confident that
there's a road, or suspect that there's a road, or doubt that there's a road. (We sometimes express a
moderate degree of confidence in X by saying "Probably X", or "I think X, but I'm not sure.") How
confident we are makes a big difference to our decisions. For example, suppose we are thinking about
planning a road trip through the hills. If there is no road through the hills, we risk getting lost or being
unable to complete our trip. So, we need to be pretty confident that a road exists before planning or
embarking on the trip. If we are not sufficiently confident, we can gather more evidence until we are
confident enough to take action, one way or the other.

So what does it mean to be accurate with beliefs like these? Suppose I think that a road probably cuts
through the hills, but in fact there's no road. This means I'm wrong. But I'm not as wrong as I would have
been if I had been certain about the road; I was only fairly confident. Or consider the example of weather
forecasting. One weather forecaster predicts rain tomorrow with 90% confidence, while another predicts
rain tomorrow with only 60% confidence. If it doesn't rain tomorrow, there's a sense in which both are
wrong. But the lack of rain counts more strongly against the accuracy of the first forecaster than against
the second.

In short, the accuracy of a belief depends on two factors: how confidently it represents things as being a
certain way, and whether things actually are that way. The more confidence we have in true beliefs, the
better our overall accuracy. The more confidence we have in false beliefs, the worse our overall accuracy.
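
The text doesn't give a formula for this, but here is a minimal sketch in Python (the function name and the scoring rule, the Brier score, are my additions, not the text's): it penalizes a forecast by the squared gap between the stated confidence and what actually happened.

```python
def brier_score(confidence: float, outcome: int) -> float:
    """Squared gap between a stated confidence (0 to 1) and the outcome
    (1 if the event happened, 0 if it didn't). Lower scores are better."""
    return (confidence - outcome) ** 2

# It did not rain (outcome = 0), so the 90%-confident forecaster is penalized more.
print(brier_score(0.9, 0))  # 0.81
print(brier_score(0.6, 0))  # 0.36
```

On this measure, the 90%-confident forecaster takes a larger penalty than the 60%-confident one when it doesn't rain, which matches the judgment in the forecasting example.
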
This conception of accuracy fits with our goals when we are genuinely
curious. When we really want to get things right, we don't allow
ourselves to feel confident in a claim unless we have sufficiently
strong evidence for it. After all, the more confident we are in
something, the more wrong we could be about it! This means we
need higher standards of evidence to support our confident beliefs.
When our goal is accuracy, we only get more confident when we gain more evidence, so that our degree
of confidence matches the strength of our evidence.

The next step is to start consciously thinking in degrees of confidence. The trick is to let go of the need to
take a side, and to start being okay with simply feeling uncertain when there isn't enough evidence to be
confident. It's also okay to just suspect that one view is correct, without the need to harden that
suspicion into an outright belief. We can take time to gather evidence and assess the strength of that
evidence. It can be somewhat liberating to realize that we don't need an outright opinion about
everything. In a world full of brash opinions, we can just let the evidence take our confidence wherever it
leads—and sometimes that's not very far.

Section Questions

2-1

In the sense used in this text, curiosity is primarily about...

A having degrees of confidence rather than binary beliefs that are entirely "on" or "off"

B having the right goal--namely, that our beliefs reflect how the world really is

C not letting ourselves be affected by strong feelings in the midst of a disagreement

D having a high degree of interest in rare and unusual things or occurrences


2-2

According to the text, the initial "map and territory" analogy has to be adapted for degrees of confidence
because...

A maps don't make decisions, but our degree of confidence makes a big difference to our decisions

B unlike a map, we are capable of revising our beliefs when we encounter more evidence

C marks on a map don't represent things as being probably or possibly a certain way

D we have beliefs about things that are not represented in maps, like bikes and non-existent mountains

2.2 Thorough
In general, we can divide the process of reasoning about an issue into three stages [6]:

At the search stage, we identify a range of possible views, as well as potential evidence for each of
them.
At the evaluation stage, we assess the strength of the evidence we've identified.
At the updating stage, we revise our degrees of confidence accordingly.

It can be easy to forget the first stage. Even if we're completely impartial in our evaluation of evidence for
alternative views, there may be some alternative views that haven't occurred to us, or evidence that
we've never considered. In that case, our reasoning will be incomplete and likely skewed, despite our fair
evaluation of the evidence that we do identify. We need our search to be thorough at the outset.

This means, as far as is reasonable given the importance of the issue: (i) seeking out the full range of
alternative views, and (ii) seeking out the full range of potential evidence for each view. Failure in either
one of these tasks is a cognitive pitfall known as restricted search. In this section, we'll examine these
two forms of restricted search more carefully.

Search for possibilities


Suppose we hear that a commercial jet plane recently crashed but we are not told why. There is a range
of possible explanations that we can consider about the cause of the crash. (Two that often jump to
mind are bad weather and terrorism.) Now suppose we were to guess, based on what we know about
past crashes, how likely it is that the primary cause of this crash was bad weather. Go ahead and write
down your guess.

Now set aside this initial guess and list three additional possible
explanations for the crash, aside from bad weather and terrorism.
Write these down on a scrap of paper, and don't stop at two. (Go on:
the exercise doesn't work unless you actually do this!) Now, at the
bottom of your list, write "other" to remind yourself that there are
probably even more alternatives you haven't considered. (Having
trouble? Did you mention pilot error, engine failure, electrical system failure, maintenance error, fire on
board, ground crew error, landing gear failure, loss of cabin pressure?) Now, as the final step of the
exercise, ask yourself if you want to revise your initial estimate of whether the recent crash was due to
bad weather.

In a wide variety of examples, the natural tendency is to think of just a couple explanations when not
prompted for more. As a result, we tend to be overly confident that one of the initial explanations is
correct, simply because we have not considered the full range. When prompted to generate more
alternative explanations, our probabilities are much more accurate. (Bad weather, incidentally, is
considered the primary cause in less than 10% of commercial jet crashes.)

Note that the need to search thoroughly for alternative possibilities doesn't just apply to explanations.
Whether we are considering an explanation or some other kind of claim, there will typically be a wide
range of alternative possibilities, and we often fail to consider very many. For example, suppose I am
wondering whether my favored candidate will win in a two-party race. I might only think about the two
most obvious possible outcomes, where one of the candidates straightforwardly receives the majority of
votes and then wins. But there are other possibilities as well: a candidate could bow out or become sick.
Or, in a U.S. presidential election, one candidate could win the popular vote while the other wins the
electoral college. Or, an upstart third-party candidate could spoil the vote for one side. If I don't consider
the full range of possible outcomes, I'm likely to overestimate the two most obvious ones. Let's call this
cognitive pitfall possibility freeze [7].

The simple solution—if we're considering an issue that matters—is to make the effort to brainstorm as
many alternatives as possible. (Of course, the hard part is actually noticing that we may not have
considered enough options.) And it's best to not only list alternatives but to linger on them, roll them
around in our minds, to see if they might actually be plausible. Even if we can't think of very many,
studies show that just imagining alternatives more vividly, or thinking of reasons why they could be true,
makes them seem more likely. This helps to counteract confirmation bias, since it loosens the mental
grip of the first or favored possibility [8]. (Confirmation bias, as you'll recall from the previous chapter, is
the tendency to notice or focus on potential evidence for our pre-existing views, while neglecting or
discounting any evidence to the contrary.)
Search for evidence
The second component of the search stage, once all available views have been considered, is the
search for potential evidence that can support them. Failure to search thoroughly and fairly for evidence
is a major pitfall that hinders accuracy [9].

It's worth being clear about how we're using the term evidence in this text, because we're using it in a
slightly specialized way. Some people associate the word "evidence" only with courts and lawsuits.
Others associate "evidence" only with facts that provide strong but inconclusive support for a claim. But
we're using the term in a very broad sense. What we mean by "evidence" for a claim is anything we come
to know that supports that claim, in the sense that it should increase our degree of confidence in that
claim. (A more rigorous definition awaits us in Chapter 5.) In this sense, a piece of evidence might
provide only slight support for a claim, it might provide strong support, or it might even conclusively
establish a claim. Evidence is anything we know that supports a claim, weakly or strongly, whether in
court, in science, in mathematics, or in everyday life.

(Two quick examples. Learning that I have two cookies gives me evidence that I have more than one. It
may sound odd to put it this way, because the evidence in this case is conclusive—but in our broad
sense, a conclusive proof still counts as evidence. At the other extreme, we may also find extremely weak
evidence. For example, in a typical case, the fact that it's cloudy out is at least some evidence that it's
going to rain. It might not be enough evidence to make us think that it will probably rain. But however
likely we thought rain was before learning that it was cloudy out, we should think that rain is at least
slightly more likely when we learn that it's cloudy. You may be doubtful about this last statement, but
after reading Chapter 5 you should understand why it's true.)
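
As a preview of the rigorous definition promised for Chapter 5 (this is a standard probabilistic gloss, not this chapter's own wording), the idea can be written as a simple inequality:

```latex
% E is evidence for a claim H just in case learning E raises the probability of H:
P(H \mid E) > P(H)
% Cloudy-sky example: P(\text{rain} \mid \text{cloudy}) > P(\text{rain}),
% even if both probabilities remain well below one half.
```

The cloudy sky can raise the probability of rain only slightly and still count as evidence in this broad sense.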

Now, back to the search for evidence. For decades, researchers have
studied how people select and process information. Overall, in
studies where people were given the opportunity to look at
information on both sides of controversial issues, they were almost
twice as likely to choose information that supported their pre-existing
attitudes and beliefs [10]. Since this is our natural tendency, restoring
balance requires deliberately searching for facts that may support alternative views, as well as
weaknesses with our own view. And that search, if it is really fair, will tend to feel like we are being far too
generous to the alternative view.
It is also crucial to seek out the strongest potential evidence for alternative views, articulated by the most
convincing sources. Encountering sources that provide only weak evidence for an opposing viewpoint
can just make us more confident in our own view [11]. It lets us feel like we've done our duty, but we end
up thinking, "If this is the sort of point the other side makes in support of their view, they must be really
wrong!" This is a mistake; after all, there are weak points and unconvincing sources on both sides of
every controversy, so we learn nothing at all by finding some and ridiculing them.

How can we get ourselves in the right mindset to seek out the best evidence for an opposing view? Let's
focus on two key questions that we can ask ourselves.

First, what would things look like if the opposing view were true?

We tend to focus on the way things would look if our beliefs were correct, and to notice when they
actually do look that way [12]. This means we often fail to notice when the evidence fits equally well with
some alternative views. It's a mistake to treat our experiences as supporting our view if in fact those
experiences are equally likely whether or not our view is true. We often miss this because we don't really
ask what we'd expect to see if other views were true.

Secondly, which observations don't fit quite right with my first or favored view?

An open search for evidence means paying special attention to facts that stick out—that is, facts that
don't seem to fit quite right with our favored hypothesis. Those are the facts that are most likely to teach
us something important, because sometimes even very well-supported theories will begin to unravel if
we pull on the threads that stick out.

Moreover, if we're honest with ourselves, these are the very facts that
we'd be focusing on if we had the opposite view! If we were motivated
to seek out problematic facts, they would jump out much more easily.
But instead, we tend to flinch away from them, or rehearse "talking
points" to ourselves about why they're unimportant. As we'll see
below, the ability to notice when we're doing this is a key skill of good
reasoning.
So, how much search for evidence is enough? The answer is not that more searching is always better:
there is such a thing as spending too much time gathering evidence. The answer should depend on the
importance of the issue, and not on whether we happen to like our current answer. Our natural tendency
is to search lazily when we are happy with the view supported by the evidence we already possess, and
to be thorough only when we hope to cast doubt on it.

In one study, subjects were asked to dip strips of test paper in their saliva to assess whether they had a
mild but negative medical condition. One group of subjects was told that the test paper would
eventually change color if they did have the condition, while the other group was told that the color
would eventually change if they did not. (In reality, the "test paper" was ordinary paper, so it did not
change color for either group.) Subjects who thought that no color change was a bad sign spent much
longer waiting for the change before deciding that the test was over, and were three times more likely to
re-test the paper (often multiple times). In other words, when the evidence seemed to be going their
way, they stopped collecting evidence. But when the evidence was not going their way, they kept
searching, in hopes of finding more support for their favored view [13].

When a search for evidence can be halted on a whim, it's called optional stopping. Scientific protocols
for experiments are designed to make sure we don't do this, but in our everyday lives we may not even
notice we're doing it. It can be a powerful way of skewing even neutral observations so that they seem to
favor our view, for the same reason that you can bias the outcome of a sequence of fair coin tosses by
choosing when to stop. Here's how: if the coin comes up heads the first time, stop. If not, keep tossing
and see if you have more heads than tails after three tosses. If not, try five! This way, you have a greater
than 50% chance of getting more heads than tails. But if you decided in advance to stop after one toss,
or three, or five, you'd have only a 50% chance that most of the tosses come up heads. In other words,
even when the method you're using for gathering evidence is completely fair (like a fair coin), you can
bias the outcome in your favor just by deciding when to stop looking.
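
Here is a minimal simulation sketch of that exact stopping rule (written in Python; the helper function names are just illustrative), compared against deciding in advance to toss five times:

```python
import random

def optional_stopping_trial() -> bool:
    """Follow the stopping rule described above: check after 1, 3, and 5 tosses,
    stopping as soon as heads outnumber tails. Report whether they do when we stop."""
    tosses = []
    for checkpoint in (1, 3, 5):
        while len(tosses) < checkpoint:
            tosses.append(random.random() < 0.5)  # True counts as heads
        heads = sum(tosses)
        if heads > checkpoint - heads:  # heads lead, so stop here
            return True
    return False  # heads never led at any checkpoint

def fixed_stopping_trial(n: int = 5) -> bool:
    """Decide in advance to toss exactly n times (n odd); did heads outnumber tails?"""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads > n - heads

trials = 100_000
print(sum(optional_stopping_trial() for _ in range(trials)) / trials)  # roughly 0.69
print(sum(fixed_stopping_trial() for _ in range(trials)) / trials)     # roughly 0.50
```

Even though every individual toss is fair, choosing when to stop pushes the chance of ending with more heads than tails from 50% to roughly 69%.
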

So, when we are comfortable with our current belief, our tendency is to stop looking for more evidence.
But when we feel pressured to accept a belief that we're uncomfortable with, we often seek more
evidence in the hope of finding some that will allow us to reject the belief. Of course, whether we seek
out more evidence should not turn on whether we want our beliefs to change or to stay the same.
Instead, the amount of evidence we seek should simply match the importance of the issue.

Another tactic we use to avoid uncomfortable beliefs is refusing to believe something unless an
unreasonably high standard of evidence has been met. Meanwhile, when we want to believe something,
we apply a much more lenient standard. In other words, we systematically cheat by applying different
thresholds for what counts as "enough evidence" for believing something: we require less evidence for
things we want to believe than for things we do not.
In fact, if we are sufficiently motivated to believe something, we might even require little or no search for
evidence at all. We might be satisfied if we can tell ourselves that it "makes sense," meaning that it fits
with the most obvious bits of evidence. But this is an absurdly weak standard of support. On almost
every important issue, there are many contradictory views that "make sense." The point of seeking more
evidence is to differentiate between all the views that "make sense"—to discover things that fit far better
with one of them than with the others. The standard for forming reliable beliefs is not "can I believe
this?" or "must I believe this?" but "what does the evidence support?" If we don't properly search for
evidence, we won't have a good answer to this question.

For desired conclusions, it is as if we ask ourselves ‘Can I believe this?’, but for
unpalatable conclusions we ask, ‘Must I believe this?’
—Thomas Gilovich, How We Know What Isn't So

Note that the tactic of selectively applying a threshold for "enough evidence" really only works with a
binary conception of belief: if we think of belief as a matter of degree, there isn't a special threshold of
evidence that we have to cross in order to "adopt" a belief. We are always somewhere on a continuum of
confidence, and the evidence simply pushes us one way or another. So we can mitigate this cognitive
error by forcing ourselves to think in terms of degrees of confidence.

If we are reasoning well, our goal in seeking evidence is not to allow ourselves to believe what we want
and avoid being forced to believe anything else. Instead, the extent of our search should match the
importance of the question, and our degree of confidence in our views should rise and fall in
correspondence with the evidence we find.

Section Questions

2-3

Failing to think of sufficiently many possibilities...


A leads to having over-confidence in the possibilities we do think of

B leads us to not imagine our first or favored possibility with sufficient vividness

C makes us almost twice as likely to choose information that supports our pre-existing attitudes and beliefs

D leads us to revise our estimate of the first view that occurred to us

2-4

Asking what we'd expect to observe if our first or favored view were true...

A helps balance our natural tendency to focus on how things would look if alternative views were true

B should not be the focus of our search because it's already our natural tendency

C is important because those are the facts we're likely to learn the most from

D helps us notice that different views can do an equally good job of explaining certain facts

2-5

Our standard for how much effort we put into a search...

A should be that additional search for information is always better

B should be based on the importance of the issue under investigation

C should be that we search for evidence until we have enough to support our favored belief
D should be that we search for evidence until every view has equal support

2.3 Open
The third element of the right mindset for reasoning is genuine openness. This means being open not
only to evidence that supports alternative views but also to revising our own beliefs, whether they are
initial reactions or considered views. As we'll see, research indicates that being consciously open in this
way can help us to overcome confirmation bias and motivated reasoning.

Decoupling
If we encounter facts that fit with our beliefs, we immediately have a good feeling about them. We expect
them, and we take them to provide strong support for our beliefs. However, if we encounter facts that fit
better with alternative views, we tend to ignore them or consider them only weak support for those
views. As a result, it feels like we keep encountering strong evidence for our views, and weak evidence
for alternatives. Naturally, this makes us even more confident.

The problem with this process is that it lends our initial reasons for a belief far more influence than all
subsequent information we encounter, thereby creating the evidence primacy effect that we
encountered in Chapter 1. If we listen to the prosecutor first and start believing that the defendant is
guilty, we allow that belief to color our assessment of the defendant's evidence. If we listen to the
defense lawyer first, it's the other way around. We start off in the grip of a theory, and it doesn't let go,
even in the face of further evidence to the contrary.

What this means is that confirmation bias affects us not only at the search stage of reasoning, but also at
the evaluation stage. And like other instances of confirmation bias, this can occur whether or not our
initial belief is motivated. Suppose there is an election approaching and my first inclination is to expect a
certain political party to win. Even if I don't care who wins, the outcome that seemed plausible to me at
first is the one on which I'll focus. Sources of evidence that support that view will tend to seem right,
since they agree with what I already think. And things only get worse if I really want that political party to
win. In that case, the belief that they will win is not only my first view but also my favored view, so I will
actively seek flaws in sources of evidence that might undermine it [14].

By contrast, good reasoning requires that we evaluate potential evidence on its own merits. This means
keeping the following two things separate: (1) our prior degree of confidence in a claim, and (2) the
strength of potential evidence for that claim.
This is called decoupling. Evaluating the strength of potential evidence for a claim requires that we set
aside the issue of whether the claim is actually true and ask what we'd expect to see both if it were true
and if it were not true. Being able to sustain these hypotheticals is a highly abstract and effortful activity
[15].

When someone presents what they take to be evidence for a claim,
they are giving an argument for that claim. (In this text, an argument
is a series of claims presented as support for a conclusion.) There are
many strong arguments for false conclusions, and many bad
arguments for true ones. Every interesting and controversial claim has
some proponents who defend it with irrelevant points and others
who appeal to genuine evidence. To distinguish these, we have to temporarily set aside our prior beliefs
and evaluate the arguments on their own merits. Only then can we decide whether (and to what degree)
our prior beliefs should be revised.

In short, we can't fairly assess the whole picture if we immediately discredit potential evidence
whenever it fails to support our initial view. But decoupling doesn't mean we set aside our prior beliefs
forever. When we get a piece of evidence against a belief we have, we should first decouple and assess
the strength of the evidence on its own merits. Having done that, we are in a position to weigh that new
evidence against our previous reasons for holding the belief. Some of the later chapters in this textbook
present a rigorous framework that explains exactly how this should be done.
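
As a rough preview of that framework (my own sketch, not this chapter's formulation), Bayes' rule in odds form shows how the two ingredients stay separate: the prior odds capture our previous confidence in a claim H, and the likelihood ratio captures the strength of the new evidence E on its own.

```latex
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior confidence in } H}
    \times
    \underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{strength of the evidence } E}
```

Decoupling corresponds to judging the second factor on its own, asking how expected the evidence would be if the claim were true versus false, before weighing it against our prior confidence.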

The bias blindspot


So to counteract confirmation bias we just need to remember to evaluate potential evidence on its own
merits, right? Unfortunately, it turns out that this doesn't help much. In various studies about how
people evaluate evidence, subjects were instructed to be "as objective and unbiased as possible," to
"weigh all the evidence in a fair and impartial manner," or to think "from the vantage point of a neutral
third party." These instructions made hardly any difference: in fact, in some studies they even made
matters worse. People don't actually decouple even when they are reminded to [16].

The explanation for this is simple: we genuinely think we're already being unbiased. We think
confirmation bias happens to other people, or maybe to ourselves in other situations. Even those who
know about a cognitive bias rarely think that they are being biased right now, even when they know they
are in precisely the kinds of circumstances that usually give rise to the bias. This effect is known as the
bias blindspot [17].
It is not our biases that are our biggest stumbling block; rather it is our biased
assumption that we are immune to bias.
—Cynthia Frantz, 'I AM Being Fair'

One reason for this is that we tend to assume that bias in ourselves would be obvious—we expect it to
feel a certain way. In other words, we expect bias to be transparent in the sense of Chapter 1. But it
simply is not! Many of the cognitive processes that are biasing our evaluation of evidence are not
accessible to introspection at all [18], and others require training and practice to recognize.

We can see this in some of the follow-up interviews for the studies on confirmation bias. One famous
study involved a large group of Stanford University students who already held opinions about the
effectiveness of capital punishment in deterring murders. They were given two studies to read—one
supporting each side of the issue, along with critical responses to the studies. The students
showed a significant bias toward whichever study supported the conclusion they happened to agree
with in the first place. As we'd expect, given our knowledge of confirmation bias, they also became more
confident in their original view.

What's striking about this study is the care with which the students
examined the opposing evidence: they offered criticisms of sample
size, selection methodology, and so on. They just happened to apply
much stricter criteria to studies that threatened their initial view. Many
subjects reported trying especially hard to be completely fair and give
the other side the benefit of the doubt. It just happened that there
were glaring flaws in the research supporting the other side! Several remarked that "they never realized
before just how weak the evidence was that people on the other side were relying on for their
opinions" [19]. This "completely fair" process consistently led both sides to greater certainty that they
were right to begin with.

The point is that even motivated confirmation bias operates under the radar. We don't consciously
decide to apply selective standards. When faced with new evidence, it really seems like an honest
assessment just happens to favor the view that we already started with. In other words: at the
subconscious level we are skewing the evidence, while at the conscious level we are blithely ignorant of
doing so. This is very convenient: it lets us believe what we want, while also considering ourselves to be
fair and balanced in our assessment of the evidence.
This suggests an element of self-deception in motivated reasoning. We can't actually let ourselves notice
that we are deceiving ourselves—or we wouldn't be deceived anymore! The psychology of self-
deception shows a fascinating pattern of covering up our own tracks so as to keep our self-image intact.
For example, in a series of studies, subjects were asked to decide whether a positive outcome (a cash
bonus, an enjoyable task, etc.) would be given to a random partner or to themselves. They were given a coin
to flip if they wanted to, and were then left alone to choose [20]. The researchers, however, had a way to tell
whether subjects had flipped the coin.

Unsurprisingly, the vast majority of people chose the positive outcome for themselves. What's more
surprising is that whether people flipped the coin had no effect on whether they chose the positive
outcome. In fact, people who had earlier rated themselves as most concerned about caring for others
were more likely to use the coin, but just as likely to choose the positive outcome! Since they were alone
when they flipped the coin, it seems they flipped it to convince themselves that they were being fair. If
you flip a coin and it comes up the right way, you can just continue to think you made the decision fairly,
and not think too much about how you would have responded had it come up the other way! If the coin
comes up the wrong way, you can always try "double or nothing," or "forget" which side was supposed
to be which. If all else fails, you can remind yourself that the coin could just as easily have come up the
other way, so you might as well take the positive outcome for yourself.

In other words, it seems people tossed the coin simply to preserve
their own self-image. (Of course, none of them would realize or admit
this!) A similar thing happens with motivated reasoning. If we knew
we were reasoning in a biased way, that would defeat the whole
purpose. We can only be biased in subtle ways that allow us to
maintain the self-conception of being fair and impartial. This doesn't
mean that it's impossible to catch ourselves showing little signs of
motivated reasoning—for example, a slight feeling of defensiveness here, a flinch away from opposing
evidence there. But it takes vigilance and practice to notice these things for what they
really are.

This has important implications for how we think about people who disagree with us on an issue.
Because we expect cognitive biases to be transparent, we simply introspect to see whether we are being
biased, and it seems like we're not. The assumption that we can diagnose our own cognitive bias
through introspection is called the introspection illusion. Meanwhile, since we can't look into other
people's minds to identify a bias, we can only look at how they evaluate evidence! Since we think our
own evaluation is honest, and theirs conflicts with ours, it's natural to conclude that they're the ones
being biased. As a result, we systematically underestimate our own bias relative to theirs. Call this
phenomenon the biased opponent effect. Worse, since we assume that biases are transparent, we
assume that at some level the other side knows that they're biased: they're deliberately skewing the
evidence in their favor. This makes us think that they are engaging with us in bad faith. But, of course,
that's not what's going on in their minds at all. It really does feel to both sides like they're the ones being
fair and impartial [21].
Having learned all this about the bias blindspot, a natural response is to think: "Wow, other people sure
do have a blindspot about their various biases!" But this is exactly how we'd react if we were also subject
to a blindspot about our own bias blindspot—which we are.

Considering the opposite


As we've seen, being told to be fair and impartial doesn't help with biased evaluation. But the good
news is that there are some less preachy but more practical instructions that actually do help. In
particular, there are two related mental exercises we can go through to help reduce biased evaluation
when faced with a piece of evidence. The first is asking how we'd have reacted to the same evidence if
we had the opposite belief. The second is asking how we would have reacted to opposite evidence with
the same belief. Psychologists use the phrase considering the opposite for both strategies.

Let's consider these strategies one at a time. The first is to ask: How would I have treated this evidence if I
held the opposite view?

We can implement this by imagining that we actually hold the opposite view, and asking ourselves how
we would treat the evidence before us. This simple change of frame substantially impacts our reaction to
potential evidence, making us more aware of—and receptive to—evidence that supports the opposing
view.

For example, suppose we believe to begin with that capital
punishment makes people less likely to commit murder, so getting rid
of it would increase the murder rate. We are given a study showing
that in three recent cases where a state abolished capital punishment,
the murder rate subsequently did increase. This would feel to us like a
very plausible result—one that we would expect to be accurate—and
so we are not on the lookout for potential reasons why it might be misleading. As a result, we take the
study as strong evidence for our view.

But now ask: how would we have reacted to this evidence if we had the opposite view, namely that
capital punishment does not deter murders? The fact that the murder rate went up after abolishing
capital punishment would have felt like an unexpected or puzzling result, and we'd immediately have
begun searching for other reasons why the murder rate might have gone up. For example, perhaps
violent crime was rising everywhere and not just in those states. Or perhaps other changes in law
enforcement in those states were responsible for the increase in murders. Or perhaps the authors of the
study have cherry-picked the data and ignored states where capital punishment was abolished and the
murder rate didn't change. And now it starts to feel like the study isn't providing such strong evidence
that capital punishment deters murders.
One dramatic way to get people to consider the opposing perspective is to have them actually argue for
the opposite side in a debate-like setting. Studies using this method have found that it helps mitigate
biased evaluation and even leads some participants to actually change their minds [23]. (This is striking
because changing our minds, especially when it comes to a topic we feel strongly about, is very difficult
to do.) But it isn't necessary to engage in an actual debate from the opposing point of view. Even just
evaluating the evidence while pretending to take the opposing side can have a profound effect.

In one study, subjects had to read extensive material from a real lawsuit, with the goal of guessing how
much money the actual judge in that case awarded to the plaintiff. They were told they'd be rewarded
on the accuracy of their guesses. But they were also randomly assigned to pretend to be the plaintiff or
the defendant before reading the case materials. The result was that those who were in the mindset of
"plaintiffs" predicted a reward from the judge that was twice as large as that predicted by the
"defendants". Even though they were trying to be accurate in guessing what the judge actually awarded
the plaintiff (so they could get the reward), they couldn't help but allow their assumed roles to influence
their interpretation of the case materials as they read them [24].

A second version of this study helps show that the bias was operating on the evaluation of evidence,
rather than just as wishful thinking on behalf of their assumed roles. In the second version, subjects were
first asked to read the case materials, and only then assigned to their roles. This time, the roles made
very little difference to their predictions, indicating that the subjects were actually aiming for accuracy
and not just "taking sides". For example, taking on the role of the plaintiff after assessing the evidence
didn't cause subjects to become more favorable towards the plaintiff's case. This suggests that the bias
in the first study was affecting how subjects were assessing the evidence as they encountered it.

The good news comes from a third version of this study, in which
subjects were first assigned their roles, but were then instructed to
think carefully about the weaknesses in their side's case and actually
list them [25]. Remarkably, this simple procedure made the
discrepancy between the "defendant" and "plaintiff" disappear.
Listing weaknesses in one's view requires employing System 2 to
examine our view carefully and critically—just as we would if we actually held the opposite view.
The second strategy is to ask: How would I have treated this evidence if it had gone the other way?

This one is a bit trickier to think about. Suppose again that we start off believing that capital punishment
deters murders, and then learn that in three states that got rid of it, the murder rate went up. The second
strategy tells us to imagine that the study had a different result: in those three states, the murder rate did
not change. Again, we would find this puzzling and unexpected and start to look for reasons why the
study might be misleading. Maybe the study authors are cherry-picking data and ignoring states where
capital punishment was abolished and the murder rate went up. Maybe the murder rate was decreasing
nationally, but did not decrease in those states because they abolished capital punishment.

In a variation of the Stanford experiment about views on capital punishment, this way of thinking had a
profound effect. In the original study, the two sides looked at the very same material and found reasons
to strengthen their own views. Even explicitly instructing the subjects to make sure they were being
objective and unbiased in their evaluations had no effect. However, a follow-up study found one method
that completely erased the students' biased evaluation: instructing them to ask themselves at every step
whether they "would have made the same high or low evaluations had exactly the same study produced
results on the other side of the issue" [22]. The simple mental act of imagining that the study was
confirming the opposite view made them look for ways in which this sort of study might be flawed.

So why does telling people to consider the opposite help, but reminding them to be fair and unbiased
does not? The answer is that people don't know how to carry out the instruction to "be unbiased" in
practice. They think it means they should feel unbiased—and they already do! But instructing people to
consider the opposite tells them how to be unbiased. After all, reasoning in an unbiased way is not a
feeling; it's a set of mental activities. The exercise of considering the opposite might feel like a trick—but
it's a trick that actually works to short-circuit the unnoticed bias in our evaluation.

Openness to revision
If it feels like we're evaluating potential evidence fairly, but somehow our favored beliefs always remain
untouched, then it's very likely that we're not really being open to alternative views. (For example, if
we've never changed our minds on any important issue, it's worth asking ourselves: what are the odds
that we happened to be right on all of these issues, all along?) After all, the only reason impartial
evaluation is useful for our goal of accuracy is that it helps us identify the strongest evidence and revise
our beliefs accordingly. But it can be hard, in the end, to push back against belief perseverance, grit our
teeth, and actually revise our beliefs.
Why is this so hard? The psychologist Robert Abelson has noted that we often speak as though our
beliefs are possessions: we talk about "holding," "accepting," "adopting," or "acquiring" views. Some
people "lose" or "give up" a belief, while others "buy into" it. To reject a claim, we might say, "I don't buy
that." Abelson also points out that we often use beliefs in much the same way that we use certain
possessions: to show off good taste or status. We can use them to signal that we are sophisticated, that
we fit in with a certain crowd, or that we are true-blue members of a social or political group.

As with possessions, too, we inherit some of our beliefs in childhood and choose other beliefs because
we like them or think people will approve of them—as long as the new beliefs don't clash too much with
those we already have. "It is something like the accumulation of furniture," Abelson remarks. This
explains our reluctance to make big changes: our beliefs are "familiar and comfortable, and a big change
would upset the whole collection" [26].

If, at some level, beliefs feel like possessions to us, then it's understandable why we get defensive when
they're criticized. And if giving up a belief feels like losing a possession, it's no wonder that we exhibit
belief perseverance. (This will be especially true if we acquired that belief for a social purpose like
signaling group membership.) If our goal is accuracy, though, then beliefs that linger in our minds with
no support shouldn't be cherished as prized possessions. They're more like bits of junk accumulating in
storage; they're only there because effort is needed to clear them out.

We sometimes have trouble letting go of old junk we've kept in our
attics and garages, even if it has no sentimental value. At some level,
we are often just averse to the idea of giving something up. If that's the
problem, then reframing the question can be useful. Rather than
asking whether we should keep an item, we can ask whether we'd
take it home if we found it for free at a flea market. If not, it has no real
value to us.

Likewise, for beliefs. Instead of asking, "Should I give up this belief," we can ask, "If this belief weren't
already in my head, would I adopt it?" Framing things this way helps counteract the sense that we'd be
losing something by letting it go.

Now remember that the language of "holding" or "giving up" beliefs suggests a binary picture of belief.
But of course, we can have a degree of confidence anywhere from certainty that a claim is false to
certainty that it's true. When we discover that an old belief has less support than we initially thought, we
may only need to gently revise our degree of confidence in it. Loosening our grasp on a belief is easier
if we remember that the choice between "holding it" or "giving it up" is a false one.
Unfortunately, changing our minds is often perceived as a kind of failure, as though it means admitting
that we should not have had the belief we gave up. But as we'll see in Chapter 5, this is a mistake. If our
goal is accuracy, the beliefs we should have at any given time are beliefs that are best-supported by our
evidence. And when we get new evidence, that often changes which beliefs are best-supported for us. So
the fact that we used to hold the most reasonable belief, given our evidence, doesn't mean that it's still
the most reasonable belief now. Revising our old beliefs in response to learning new facts is not "flip-
flopping" or being "wishy washy": it's called updating on the evidence, and it's what experts in every
serious area of inquiry must do all the time. The real failure in reasoning is not changing our minds when
we get new evidence.

Sometimes, of course, we realize that we've been holding onto a belief that was actually unreasonable
for us to accept. In that case, changing our minds really does involve admitting that we made a mistake.
But if we don't do so, we're just compounding our original error—which is especially bad if the belief has
any practical importance to our lives or those of others.

A man who has committed a mistake and doesn't correct it, is committing
another mistake.
—Confucius

Section Questions

2-6

Match each item with the effect that it causes (not its definition). Take care to choose the best match for each
answer.

Premise

1 introspection illusion
2 possibility freeze
3 pretending to take the other side
4 being reminded to avoid bias in our evaluation
5 keeping in mind that our beliefs don't need to be on/off

Response

A assuming we are being more honest than those who disagree with us
B no change
C too much confidence in our first or favored view
D finding it easier to revise our beliefs
E reduction in biased evaluation

2-7

Which best describes how confirmation bias operates at the evaluation stage of reasoning?

A when we are motivated to believe something, we construe potential evidence as favoring it

B our first or favored beliefs influence our assessment of the strength of potential evidence

C we assume that people on the other side of a controversial issue are evaluating information in a biased
way, but we are not

D we decouple our prior degree of confidence in a claim from the strength of a new piece of evidence

2-8

The text discusses studies in which people could flip a coin to make a decision in order to illustrate...

A the lengths we go to believe that we're being fair even when we're not

B that we too often allow ourselves to be influenced by random factors like coin tosses

C that we tend to choose positive outcomes (e.g. cash bonuses) for ourselves

D that we should never use random factors like coin tosses to make fair decisions
2-9

Subjects assessing studies that provided evidence about capital punishment...

A were asked to guess how much money an actual judge awarded to the plaintiff, and predicted a higher
award when pretending to be the plaintiff

B successfully decoupled after being instructed to be fair and impartial in their assessment of evidence

C overcame biased evaluation by taking great care to examine the evidence offered by the study that
challenged their view

D successfully decoupled after asking what they would have thought of a study if its result had gone the
other way

2-10

Pretending to take the opposing side of an issue...

A is a bad idea because it triggers a confirmation bias in the direction of the side we're pretending to take

B does not work as well as thinking carefully about weaknesses in our own case

C will allow us to perceive bias in ourselves through introspection

D helps counter the confirmation bias we already have in favor of our side

Key terms
Accuracy: the extent to which our beliefs reflect the way things actually are, much like a map reflects the
way a territory is. The concept of accuracy applies not only to binary beliefs but also to degrees of
confidence. For example, if the cat is not on the mat, then believing that the cat is definitely on the mat is
less accurate than believing that it's probably on the mat.
Argument: a series of claims presented as support for a conclusion.

Bias blindspot: the tendency not to recognize biases as they affect us, due to the fact that the
processes that give rise to them are not transparent, even when we recognize them in others.

Biased opponent effect: a result of the introspection illusion. Given that we think our own reasoning is
unbiased, and that our opponent comes to very different conclusions, we commonly conclude that their
reasoning must be biased.

Binary belief: treating beliefs as if they are on/off. For example, we either believe that the cat is on the
mat or that the cat is not on the mat, without allowing for different degrees of confidence.

Considering the opposite: a technique to reduce biased evaluation of evidence, where we ask
ourselves either one of two questions: (i) How would I have treated this evidence had it gone the opposite
way? or (ii) How would I have treated this evidence if I held the opposite belief?

Decoupling: separating our prior degree of confidence in a claim from our assessments of the strength
of a new argument or a new piece of evidence about that claim.

Degrees of confidence: treating beliefs as having different levels of certainty. Just as we can be
absolutely sure that x is true or false, we can have every level of certainty in between, e.g. thinking that x
is slightly more likely to be true than false, very likely to be true, etc.

Evaluation stage: the second stage in the reasoning process, when we assess the strength of the
potential evidence we’ve gathered.

Evidence: a fact is evidence for a claim if coming to know it should make us more confident in that
claim. The notion of evidence is more rigorously defined in Chapter 5.

Introspection Illusion: the misguided assumption that our own cognitive biases are transparent to us,
and thus that we can diagnose these biases in ourselves through introspection.

Optional stopping: allowing the search for evidence to end when convenient; this may skew the
evidence if (perhaps unbeknownst to us) we are more likely to stop looking when the evidence collected
so far supports our first or favored view.
Possibility freeze: the tendency to consider only a couple of possibilities in detail, and thereby end up
overly confident that they are correct.

Restricted search: the tendency not to seek out the full range of alternative views or the full range of
evidence that favors each view. Along with biased evaluation, this is an instance of confirmation bias.

Search stage: the first stage of the reasoning process, where we identify a range of possibilities and any
evidence that may support them.

Updating on the evidence: revising our prior beliefs in response to new evidence, so that our
confidence in a belief will match its degree of support.

Updating stage: the third and final stage of the reasoning process, when we revise our degree of
confidence appropriately.

Footnotes
[1] See Stanovich, West, & Toplak (2013; 2016). In the literature, motivated confirmation bias is sometimes called "myside bias"; however, to avoid proliferating labels, I'll just use the more informative term "motivated confirmation bias."

[2] Baron (2009), pg. 200. I've re-ordered the three attributes.

[3] Aristotle, Metaphysics, Book 1. I've altered Kirwan's translation.

[4] The contrast between these two mindsets is based on the distinction between the "soldier mindset" and the "scout mindset" in Galef (2017). For a few reasons I have changed the labels.

[5] The map/territory metaphor of the relationship between mental and linguistic representation and the world comes from Korzybski (1958).

[6] See Baron (2009), pp. 6-12. In line with Baron's three attributes of good reasoning (pg. 200), I have split his inference stage into evaluation of the strength of evidence and updating one's beliefs appropriately.

[7] This is a riff on Julia Galef's term "explanation freeze" and is intended to apply to cases beyond explanations. For evidence of this effect, see Kohler (1991); Dougherty et al. (1997); Dougherty and Hunter (2003).

[8] See Carroll (1978); Gregory et al. (1982); Levi et al. (1987); Koriat et al. (1980).

[9] See, for example, Haran, Ritov, & Mellers (2013).