
Anchoring Bias

We are all prone to being too influenced by the first piece of information we hear about a new
problem.

Anchoring is the tendency for people to be overly influenced by the first piece of information
(often a number) they receive when making estimates or decisions.

When people are primed with a number and then asked a question, for instance, their
estimate tends to be closer to the original number than the estimate given by people who
weren't given that number.

That is, the anchor remains as a reference point and their response is an adjustment to the
number they were originally given.

Why does this happen?


The anchoring effect has been described as "easy to demonstrate but hard to explain" and may
have multiple causes.

There seems to be an element of anchoring that occurs as an automatic response to being
primed with that number - the power of suggestion; people will first consider the hypothesis
that the number they have been given is correct.

There also seems to be an element of anchoring as a more deliberate process; people adjust
away from the original number, but generally not sufficiently (especially when they are tired
or stressed).

Can we overcome this tendency?


Anchoring is difficult to avoid, even when one is aware that the original number is obviously
wrong.

This effect works even on subject-matter experts who thought they were able to ignore the
influence of the earlier number they were given.

Non-experts are usually slightly more affected by anchoring, although they are often more
conscious of the effect the anchor had on them.

To try to minimise its effects, you need to keep reminding yourself that the first piece of
information is not necessarily accurate or representative.

The anchoring effect proves difficult to overcome, though, even when test subjects have been
made aware of it and encouraged to try to compensate for it.

One way of trying to overcome its effect in negotiations is to think carefully of counter-
arguments to the anchor.
Thinking the opposite is a good strategy to help defend against anchoring.
https://youarenotsosmart.com/2010/07/27/anchoring-effect/
Thinking, Fast and Slow - Daniel Kahneman

Availability
Some things ("plane crash!") come to mind easily, and this makes us overestimate how likely
or frequent such things are.

She heard of two planes crashing last month, so now she prefers to take the train. That's
silly. Flying by plane isn't really riskier; it's the availability bias at work.

When we wish to estimate the likelihood of a particular event, we tend to search our memory
for similar examples. If we can easily think of examples, we tend to assume, often wrongly,
that the likelihood of such an event is quite large.

We often fall into the trap of thinking that events that get a lot of media coverage are more
representative than they are.

A dramatic event (such as a major plane crash) will often lead us to think that planes crash
more often than they do. We can then underestimate the statistically greater risk of using other
means of transport.

Similarly, we can more easily recall personal experiences and vivid images than statistics on
the occurrence of particular events.
https://www.youtube.com/watch?time_continue=115&v=2_wkv1Gx2vM

Base Rate Neglect


Forgetting to factor into your thinking what the typical (base, background) rate would be

When considering probability, we have a tendency to forget just how common or uncommon
it is for something to occur (the base rate or the background context).

We can be distracted by specific details and other pieces of evidence and forget how likely or
unlikely something is in the first place. We forget to consider the outside view and instead
focus on the details of the question (the inside view).

It is a good idea to keep in mind the overall context (the base rate) and then adjust our
estimate up or down depending on other relevant details.

An example of Base Rate Neglect


What is the probability that Vice President Pence will become President?

Most people would naturally focus on President Trump and the ways he might end up leaving
office before his term is up.

They would probably not think of asking how often, in general, US vice presidents become
president. But asking this question is a very good place to start:
"Pence's odds of becoming President are long but not prohibitive. Of his forty-seven
predecessors, nine eventually assumed the Presidency, because of a death or a resignation.
After Lyndon Johnson decided to join the ticket with John F. Kennedy, he calculated his
odds of ascension to be approximately one in four, and is said to have told Clare Boothe
Luce, 'I'm a gambling man, darling, and this is the only chance I've got.'" (From The New
Yorker)

Even before you consider the specifics of Trump, Pence and the situation in 2017, you can
make a good estimate of Pence's chances by using the base rate: 9/47.
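As a quick arithmetic check, here is that base rate worked out (a minimal sketch; the only inputs are the 9-out-of-47 figures quoted above):

```python
# Base rate for a US vice president eventually becoming president,
# using the figures quoted above (9 of 47 predecessors).
became_president = 9
total_vps = 47

base_rate = became_president / total_vps
print(f"Base rate: {base_rate:.0%}")  # roughly 19%, before any case-specific detail
```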

Birthday Paradox
Coincidences happen more often than we tend to think. The Birthday Paradox is a standard
illustration of this.

How many people do you need in a room before the odds of two people in the room having
the same birthday are even or better?

The mathematical answer is 23.

This is the Birthday Paradox. Technically, it is not really a paradox; it is just a very surprising
fact. It is surprising because our intuition or "System 1" thinking tends to be way off here,
estimating the probability of a joint birthday as being much lower than it really is.
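You can check the 23-person figure directly: compute the probability that all n birthdays are distinct and subtract it from 1. A minimal sketch, assuming 365 equally likely birthdays and ignoring leap years:

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely birthdays (leap years ignored).
def shared_birthday_probability(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

for n in (10, 23, 50):
    print(n, round(shared_birthday_probability(n), 3))
# 23 people is already enough to push the probability past 0.5 (about 0.507)
```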

The Birthday Paradox is a standard illustration of a more general point: coincidences happen
more often than we think. An event which might look like a very implausible coincidence
might be quite plausible, depending on how you frame things.

Expert reasoners tend to be familiar with the Birthday Paradox and use it as a handy reminder,
for themselves and others, of one kind of error we tend to make when estimating probabilities.
See also the excellent Better Explained page on the topic.

Blue-Green Cab Problem


Classic problem in the psychology of reasoning. Highlights our tendency to neglect base-rate
information.
The Blue-Green Cab problem is a classic of the research literature on human judgement.

In a famous example (posed by Amos Tversky and Daniel Kahneman):

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the
Blue, operate in the city. You are given the following data:

1. 85% of the cabs in the city are Green and 15% are
Blue.
2. A witness identified the cab as Blue. The court
tested the reliability of the witness under the same
circumstances that existed on the night of the
accident and concluded that the witness correctly
identified each one of the two colours 80% of the
time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green?

What is the most common mistake made in solving the problem?

Many people are distracted by the 80% success rate of the witness in identifying the colour of
the cab and underestimate the importance of the ratio of Green cabs to Blue cabs, which
provides the basic context.

Since most cabs in the city are Green, it is relatively likely that the witness actually saw a
Green cab and wrongly identified it as Blue.

This error of judgement is called base rate neglect; that is, ignoring, or not properly
accounting for, the base rate - in this case, the information that 85% of the cabs are Green
and 15% Blue.

How to solve this type of problem

When solving problems of this sort, we need to take into account the basic proportion of
different types of cab as well as the reliability of the witness.

A good way to solve this problem correctly is to use Bayes's Theorem, a fundamental law of
probability which helps you to revise predictions in the light of relevant evidence. It is built
on conditional probability and is sometimes known as inverse probability.

Most people find the theorem quite difficult to remember and apply correctly. An easier way
of applying it is to use a diagram.

You start off by showing the total number of cabs in the city
as blue and green squares in a 10 x 10 table.
(from Mind Your Decisions - The Taxi-Cab Problem)

You then calculate the proportion of cabs correctly and incorrectly identified. If the witness
is 80% accurate, 80% of blue cabs are identified as blue and 80% of green cabs are identified
as green. The remaining cabs are incorrectly identified.

(from Mind Your Decisions - The Taxi-Cab Problem)

Then separate out the cabs identified as blue (including blue cabs correctly identified and
green cabs incorrectly identified). You end up with more cabs incorrectly identified as blue
than correctly identified: only 41% of the cabs identified as blue were actually blue, meaning
there is a less than 50% probability that the cab was blue rather than green.
(from Mind Your Decisions - The Taxi-Cab Problem)

Even though the witness is regarded as a good witness, they are more likely than not to be
wrong. Their testimony has, however, increased the likelihood of the cab being blue to 41%
(just under three times the base rate of 15%).

More about the Blue-Green Cab Problem

The Taxi-Cab Problem (Mind Your Decisions)
Video on the Blue-Green Cab problem and base-rate neglect
Explains how to analyse the cab problem using a Bayes Net

Change Blindness
Failure to notice significant changes in a situation because
you are focusing your attention on some other aspect(s).

Confirmation Bias
We tend to try to confirm what we already believe. This affects how we look for, interpret,
and remember information.

When we already strongly believe something, confirmation bias is the tendency to:

Seek out information which backs up our belief;
Interpret information as supporting or compatible with our belief;
Reject information which contradicts our belief;
More easily remember information which backs up our belief.

Confirmation bias is a universal, powerful cognitive bias. It may well be the single most
pervasive and consequential bias.

An example
What happened to Kim Jong Nam?

Kim Jong Nam was the half-brother of the despotic leader of North Korea, Kim Jong Un. He
was poisoned in Kuala Lumpur International Airport on February 13, 2017.

It might seem natural to consider "He was assassinated at the behest of Kim Jong Un, to
consolidate his own power" as your answer.

Remember, cases like this can be extremely complex, involving many moving parts. If you
went looking for evidence that would confirm your hypothesis, you would likely find it. But,
before you settle on that answer, you should ask the question "what would it take for this to
be false?"

By doing so, you bring into focus other elements that you might not have considered. For
instance, Kim Jong-Nam's gambling habits and somewhat hedonistic lifestyle might have
meant that he had outstanding debts with organised crime.

Your first answer might be the correct one but, by considering alternatives, you can mitigate
the effect of the confirmation bias.

More about confirmation bias


Confirmation bias: Why you make terrible life choices is a good overview, though it
focuses on everyday life rather than the kind of thinking intelligence analysts do.

Article by David McRaney


Conjunction Fallacy (Linda Problem)
Tendency to overestimate the probability of more specific things, e.g. to think that Linda is
more likely to be a feminist bank teller than she is to be a bank teller

The conjunction fallacy is the tendency to think that a complex event (a "conjunction") is
more probable than a simpler, more general event.

In psychological research, the conjunction fallacy was originally demonstrated with a now-
famous puzzle called the Linda Problem. The researchers invented a fictional woman called
Linda with various characteristics (age, personality, attitudes, education), and then asked
people how likely it was that she was a bank teller or a bank teller active in the feminist
movement.

The researchers found that most people concentrated on their perception of her background
and attitudes and gave insufficient weight to the probability of her being part of a particular
sub-group (in this case, both a bank teller and active in the feminist movement).

In other words, the people taking the test failed to use a logical approach and judged that the
probability of two events occurring together (in "conjunction") was higher than the
probability of either one occurring on its own.
The researchers called this failure of reasoning the conjunction fallacy: people choose the
less likely conjunction of events because their subjective assessment of what Linda was
likely to be doing is driven by their preconceived ideas about her and what fits their image
of her.

Stephen Jay Gould (the late evolutionary biologist and historian of science) admitted to falling
into this trap:

"I am particularly fond of this example because I know that the [conjoint] statement is least
probable, yet a little homunculus in my head continues to jump up and down, shouting at
mebut she cant just be a bank teller; read the description.

When is it relevant?
This logical fallacy provides a trap for forecasters and those using their predictions when
constructing scenarios. The more detail you add to a scenario, the more coherent and plausible
it becomes. This does not mean, however, that it becomes more probable.
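The underlying rule is that a conjunction can never be more probable than either of its parts: P(A and B) <= P(A). A minimal sketch with made-up numbers for the Linda problem (the probabilities below are purely illustrative, not from the original study):

```python
# Conjunction rule: P(A and B) can never exceed P(A).
# Illustrative, invented numbers for the Linda problem.
p_bank_teller = 0.05                 # P(Linda is a bank teller)
p_feminist_given_teller = 0.90       # even if most such tellers were feminists

p_feminist_bank_teller = p_bank_teller * p_feminist_given_teller
print(p_feminist_bank_teller)                    # 0.045
print(p_feminist_bank_teller <= p_bank_teller)   # True - always
```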

More about the Linda problem


Daniel Kahneman's book, Thinking Fast and Slow, has a chapter on this subject
(see pp. 156-165)

Article and podcast from David McRaney's blog "You are not so smart"

Correlation Doesn't Prove Causation


Don't just assume that an association between two things means that one causes the other

We should never assume that something which seems to be connected with a particular event
has caused that event to happen.

We need to examine carefully whether one event which is associated with another is caused
by (or has caused) the other event. A simple correlation doesn't prove anything.

It might simply be a coincidence that something else occurred around the same time as the
event we are looking at.

Both events could be linked in other ways, such as both being effects of a third factor (a common cause).

There could be indirect causes and effects due to other factors (often referred to as
confounding variables).

When conducting controlled experiments, for instance, scientists often run the test with
different levels of a variable to see whether the correlation really does reflect causation.
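A confounding variable is easy to demonstrate in simulation. In the sketch below (all variable names and numbers are invented for illustration), two quantities that never influence each other correlate strongly because both are driven by a third factor:

```python
import random

random.seed(1)

# A hidden common cause drives both observed variables.
temperature = [random.gauss(20, 5) for _ in range(1000)]                # confounder
ice_cream_sales = [2.0 * t + random.gauss(0, 2) for t in temperature]
drownings = [0.5 * t + random.gauss(0, 1) for t in temperature]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, yet neither causes the other:
# hot weather drives both.
print(round(pearson(ice_cream_sales, drownings), 2))
```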

Humorous examples of spurious correlations


Tyler Vigen, a Harvard student, has plotted various spurious links between randomly selected
statistics.


We should also remember that there can be a causal relationship even without an association
(correlation) - "causal obfuscation".

The Classic Cartoon by XKCD:
Related Lenses
Post Hoc Fallacy - Jumping to the conclusion that A caused B from observing that A
happened before B

ADAM - a handy checklist for helping decide whether you've got a genuine causal relationship

Dunning-Kruger Effect
Thinking you know more than you do, because the knowledge you lack is exactly the
knowledge you would need to recognize how little you know.

"Real knowledge is to know the extent of one's ignorance." - Confucius

"The fool doth think he is wise, but the wise man knows himself to be a fool." - Touchstone,
in As You Like It by William Shakespeare

John Cleese talking about the Dunning-Kruger Effect:

The Dunning-Kruger Effect is a type of cognitive bias whereby people who are incompetent
at something are unable to recognize their own incompetence. And not only do they fail to
recognize their incompetence, they're also likely to feel confident that they actually are
competent.

Dunning and Kruger were inspired by a Pittsburgh man who was astonished to be so easily
caught after trying to rob a bank with his face covered in lemon juice (something he thought
would make him invisible because of its association with "invisible ink"). While watching the
CCTV footage, he still couldn't understand why his face was visible - he was not competent
enough to see the logical gaps in his thinking and plan.

As Dunning put it,

"the knowledge and intelligence that are required to be good at a task are often the same
qualities needed to recognize that one is not good at that task - and if one lacks such
knowledge and intelligence, one remains ignorant that one is not good at that task."

David McRaney commented in his blog "You are not so smart":


Each one of us has a relationship with our own ignorance, a dishonest, complicated
relationship, and that dishonesty keeps us sane, happy, and willing to get out of bed in the
morning. Part of that ignorance is a blind spot we each possess that obscures both our
competence and incompetence.

Training and education can help solve this problem


Some research suggests that people can start to criticize their own previous poor skills once
they are trained and can see the difference between their previous poor performance and their
new improved performance.

Related Lens
Overconfidence

More about the Dunning-Kruger Effect


Dunning and Kruger's original research paper: Unskilled and unaware of it: How
difficulties in recognizing one's own incompetence lead to inflated self-assessments
An article from Forbes Magazine: "The Dunning-Kruger Effect Shows Why Some
People Think They're Great Even When Their Work Is Terrible"

David McRaney's article and podcast on "Why we are unaware of how unaware we are"

False Dichotomy
"It's either my way or the highway." Wrongly assuming two options constitute the only
options.

More information about false dichotomy


David McRaney's podcast: The Black and White Fallacy

Gambler's Fallacy
Drawing a conclusion from a spurious pattern in essentially random data; e.g. in tossing
coins, thinking a tail is "due" if heads have appeared three times in a row.

Opening scene of the film of Tom Stoppard's play Rosencrantz and Guildenstern are Dead

Try tossing a coin for a while. We know that the results are random, but we can fall into the
trap of seeing patterns that aren't there and making faulty predictions based on a common
mistake called the Gambler's Fallacy.
Unless the coin has been tampered with, the rough probability of it coming up heads or tails
is, and remains, 50/50.

If you only toss the coin a relatively small number of times, though, the results could look
very different.

If you have tossed seven heads in a row, for instance, you could draw one of the following
inferences:

Tails is somehow "overdue" to come up. This fits with our desire for the world to balance
up, combined with our knowledge that, over time, there would be a rough 50/50 split in
the results.
Heads is somehow on a winning streak or "on a roll" and will keep winning.

Both ideas are wrong.

How to overcome this cognitive bias

Remember, the coin has no memory of the previous coin tosses (well, not in this universe,
anyway). Each toss of the coin is an independent event having no connection with any of the
previous or future coin tosses.

We may see them as part of a story but they are separate events with no connection between
them other than the one we are constructing in our heads. We, as humans, tend to
underestimate the role of chance in the world and look to find patterns where there may not be
any.
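If you want to convince yourself, you can simulate it. The sketch below counts what happens immediately after each run of seven heads; the frequency of heads on the next toss stays close to 50% (a quick sanity check, not a proof):

```python
import random

random.seed(42)
flips = [random.choice("HT") for _ in range(1_000_000)]

streaks = heads_after = 0
for i in range(7, len(flips)):
    if all(f == "H" for f in flips[i - 7:i]):   # previous seven flips were heads
        streaks += 1
        if flips[i] == "H":
            heads_after += 1

# Tails is not "due": the next flip is still about 50/50.
print(streaks, round(heads_after / streaks, 3))
```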

More about the Gambler's Fallacy


The Wikipedia entry is quite good

Also, a series of short videos on the Gambler's Fallacy:

The Gambler's Fallacy: The Basic Fallacy (1/6)
The Gambler's Fallacy: Fairness and Independence (2/6)
The Gambler's Fallacy: When is a Coin Toss Fair? (3/6)
The Gambler's Fallacy: The Physics of Coin Tosses (4/6)
The Gambler's Fallacy: Casinos and the Gambler's Ruin (5/6)
The Gambler's Fallacy: The Psychology of Gambling (6/6)

Gorillas and Basketball

The classic video demonstration of how we can fail to see what's right in front of us, if we're
paying attention to something else.

See Inattentional Blindness


Hindsight Bias
Hindsight (knowing the outcome or answer) makes things seem more obvious or predictable
than they really were

"Everything makes sense in hindsight... The illusion that we understand the past fosters
overconfidence in our ability to predict the future." - Nassim Taleb


Hindsight bias is the tendency of people to overestimate their ability to have predicted an
outcome that could not possibly have been predicted. It is thus a form of overconfidence.

Hindsight Bias in Intelligence

According to Richards Heuer in Psychology of Intelligence Analysis:

Hindsight biases influence the evaluation of intelligence reporting in three ways:

Analysts normally overestimate the accuracy of their past judgments.
Intelligence consumers normally underestimate how much they learned from
intelligence reports.
Overseers of intelligence production who conduct postmortem analyses of an
intelligence failure normally judge that events were more readily foreseeable than
was in fact the case.

More about Hindsight Bias


Wikipedia has a comprehensive article on hindsight bias.
Duncan Watts's article, The Hindsight Fallacy: The real reason it's so hard to predict
bubbles. (Duncan Watts is the author of Everything is Obvious: Once You Know the
Answer - How Common Sense Fails Us.)

Nassim Taleb, The Black Swan: The Impact of the Highly Improbable, 2007 (2nd edition,
revised and expanded)
Nassim Taleb, Antifragile: Things That Gain from Disorder
