The Trouble With Intuition
The Chronicle Review, http://chronicle.com/article/The-Trouble-With-Intuition/65674/

By Daniel J. Simons and Christopher F. Chabris


"How do I love thee? Let me count the ways." Those lines by
Elizabeth Barrett Browning, written while she was being courted by
Robert Browning, and among the most famous in all of poetry, open
one of 44 of her love poems that are collectively known as Sonnets
From the Portuguese. The Sonnets were first published in book
form in 1850, as part of the second edition of her collected poems.
At least that's what poetry scholars and bibliophiles thought for
several decades until Thomas J. Wise announced that he had
discovered a previously unknown earlier printing.

Wise was a celebrated British collector of rare books and
manuscripts in the late 19th and early 20th centuries; the catalog of
his private library filled 11 volumes. In 1885 an author named W.C.
Bennett showed Wise several copies of a 47-page, privately printed
pamphlet of the Sonnets dated 1847 and marked "not for
publication." Private printings of literature were not unusual in that
era. What was unusual was the discovery of a previously unknown
collection of such important poetry that predated the first known
public printing. Wise immediately recognized the rarity and value of
the pamphlets, and bought one for £10 (about $1,200 today). Over
the ensuing years, Wise discovered other previously unknown
collections of minor works by major authors, including some by
Alfred Tennyson, Charles Dickens, and Robert Louis Stevenson.
Collectors and libraries snapped up those volumes, and Wise's fame
and wealth grew.

At first glance, Wise's items seemed authentic, especially to a buyer
considering just one pamphlet at a time. Each one fit nicely with the
rest of its author's body of work. For example, the date of
Browning's private printing of the Sonnets corresponded with a gap
of four years between when the poems were finished and when they
were officially published. The pamphlets also appeared to be
authentic in format and typography—just how an expert would
expect them to look and feel. Although the steady stream of new
discoveries by Wise did raise isolated suspicions that something
might be amiss, the pamphlets he distributed were broadly
respected as genuine for decades.

Some 45 years after Wise found the private edition of the Sonnets,
two British book dealers, named John Carter and Graham Pollard,
decided to investigate his finds. They re-examined the Browning
volume and identified eight reasons why its existence was
inconsistent with typical practices of the era. For example, none of
the copies had been inscribed by the author, none were trimmed
and bound in the customary way, and the Brownings never
mentioned the special private printing in any letters, memoirs, or
other documents.

The array of circumstantial evidence was impressive, but not
conclusive. Carter and Pollard found their smoking gun with
scientific analysis. First they documented the fact that all paper
used for printing before the 1860s was created from rags, straw, or a
strawlike material called esparto. Carter and Pollard then examined
one of Wise's Sonnets pamphlets under a microscope and found
that the paper was made from chemically treated wood pulp, a
technique that wasn't used in Britain until the 1870s. The 1847
edition had to be a fake. The two dealers proved that nearly half of
the other Wise pamphlets they examined were also fraudulent, and
published their findings in a 412-page book. Wise denied the
charges until he died, in 1937, but subsequent investigations
confirmed Carter and Pollard's work. Today Wise is celebrated as
one of the greatest forgers of all time.

If you have read Malcolm Gladwell's 2005 book, Blink, which is
subtitled The Power of Thinking Without Thinking, this tale might
seem familiar. Blink begins with a similar story, about an ancient
Greek statue known as a kouros, that was offered to the Getty
Museum, in Los Angeles. The curators believed the kouros to be
genuine, and, relying on scientific tests of its authenticity, they
bought it for nearly $10-million. But other art historians, upon first
viewing the statue, instantly thought that it was hinky. The former
director of the Metropolitan Museum of Art said his first reaction
was "fresh"—as in, too fresh-looking to be so old. A Greek
archaeologist "saw the statue and immediately felt cold." According
to Gladwell, those experts' intuitions proved correct, and the initial
scientific tests that authenticated the statue turned out to have been
faulty.

Gladwell uses the kouros forgery to launch his case for the
surprising power of intuitive snap judgments and instinctive gut
feelings, which he calls "rapid cognition." As he puts it, "there can
be as much value in the blink of an eye as in months of rational
analysis." Gladwell goes on to argue that rapid intuitions often
outperform rational analyses, and that excessive thinking can lead
us to mistakenly second-guess what we know in our gut to be true.
Is that conclusion merited? In the case of Wise's pamphlets, the top
experts and collectors of the time trusted in their authenticity, but
their rapid judgments were wrong, and only painstaking systematic
analysis, which integrated multiple types of information from a
variety of sources, uncovered the truth. And even for the kouros,
expert intuition was divided: The Getty's curators must have
initially thought the statue looked authentic, or they wouldn't have
considered buying it in the first place. In fact, some experts still
believe the kouros to be authentic, and the Getty today labels it
"Greek, about 530 B.C., or modern forgery."

Cases in which forgeries intuitively appear real but are later
discovered through analysis to be frauds are fairly common in the
art world. Many of the master forger Han van Meegeren's paintings
hung in galleries around the world before scientific analysis showed
that they were not authentic Vermeers. Indeed, the skill of the
forger is precisely in creating works that appear at first glance, even
to experts, to be genuine, and that can be exposed as fakes only
through lengthy, expensive study. Like Wise's pamphlets, the
infamous "Hitler Diaries" were declared authentic and made public
in the 1980s before paper-testing proved that they had been created
after the end of World War II.

Gladwell's message in Blink has been interpreted by some readers
as a broad license to rely on intuition and dispense with analysis,
which can lead to flawed decisions. In his book, Too Big to Fail: The
Inside Story of How Wall Street and Washington Fought to Save
the Financial System From Crisis—and Themselves (Viking, 2009),
the New York Times journalist Andrew Ross Sorkin notes that the
former Lehman Brothers president Joseph Gregory was a devotee of
Blink who even hired Gladwell to lecture his employees "on trusting
their instincts when making difficult decisions." (Gregory was
removed from power as his firm circled the bankruptcy drain in
2008.)

Intuition means different things to different people. To some it
refers to a sudden flash of insight, or even the spiritual experience
of discovering a previously hidden truth. In its more mundane form,
intuition refers to a way of knowing and deciding that is distinct
from and complements logical analysis. The psychologist Daniel
Kahneman nicely contrasts the two: "Intuitive thinking is
perception-like, rapid, effortless. ... Deliberate thinking is
reasoning-like, critical, and analytic; it is also slow, effortful,
controlled, and rule-governed." Intuition can help us make good
decisions without expending the time and effort needed to calculate
the optimal decision, but shortcuts sometimes lead to dead ends.
Kahneman received the Nobel Memorial Prize in Economic Science
in 2002 for his work with the late Amos Tversky that showed how
people often rely on intuitive heuristics (rules of thumb) rather than
rational analysis, and how those mental shortcuts often lead us to
make decisions that are systematically biased and suboptimal.

Gerd Gigerenzer, director of the Max Planck Institute for Human
Development and author of Gut Feelings: The Intelligence of the
Unconscious (Viking, 2007), takes a more benign view of intuition:
Intuitive heuristics are often well adapted to the environments in
which the human mind evolved, and they yield surprisingly good
results even in the modern world. For example, he argues, choosing
to invest in companies based on whether you recognize their names
can produce reasonably good returns. The same holds for picking
which tennis player is likely to win a match. Recognition is a prime
example of intuitive, rapid, effortless cognition. Gigerenzer's book
jacket describes his research as a "major source for Malcolm
Gladwell's Blink," but the popular veneration of intuitive decision-
making that sprang from Blink and similar works lacks the nuance
of Gigerenzer's claims or those of other experimental psychologists
who have studied the strengths and limits of intuition.

The idea that hunches can outperform reason is neither unique nor
original to Malcolm Gladwell, of course. Most students and
professors have long believed that, when in doubt, test-takers
should stick with their first answers and "go with their gut." But
data show that test-takers are more than twice as likely to change an
incorrect answer to a correct one as vice versa.

Intuition does have its uses, but it should not be exalted above
analysis. Intuition can't be beat when we are deciding which ice
cream we like more, which songs are catchier, which politician is
most charismatic. The essence of those examples is the absence of
any objective standard of quality—there's no method of analysis that
will decisively determine which supermodel is more attractive or
which orchestra audition was superior. The key to successful
decision making is knowing when to trust your intuition and when
to be wary of it. And that's a message that has been drowned out in
the recent celebration of intuition, gut feelings, and rapid cognition.

There is, moreover, one class of intuitions that consistently leads us
astray—dangerously astray. These intuitions are stubbornly
resistant to analysis, and it is exactly these intuitions that we
shouldn't trust. Unfortunately, they are also the intuitions that we
find the most compelling: mistaken intuitions about how our own
minds work.

We met in the late 1990s at Harvard University, where Dan was a
new psychology professor and Chris was a graduate student. As part
of an undergraduate laboratory course Dan was teaching, we
decided to re-examine some landmark studies the cognitive
psychologist Ulric Neisser conducted in the 1970s. In one of those
experiments, observers counted the number of times a group of
three people wearing white shirts passed a basketball to one another
while ignoring three people wearing black shirts who were also
passing a ball. In the middle of the video, a woman carrying an open
umbrella walked through the scene. Surprisingly, many of the
observers didn't notice her. Some psychologists assumed that this
failure was a side effect of the unusual video displays Neisser
used—the players and the umbrella woman were all partially
transparent and looked ghostly, making them somewhat harder to
see. As a class project, we decided to test whether people could miss
something that was opaque and fully visible.

We filmed the basketball-passing game with a single camera and,
like Neisser, we had a female research assistant stroll through the
game with an open umbrella. We also made a version in which we
replaced the umbrella woman with a woman in a full-body gorilla
suit, even having her stop in the middle of the game, turn toward
the camera, thump her chest, and exit on the other side of the
display nine seconds later. People might miss a woman, we thought,
but they would definitely see a gorilla.

We were wrong. Fifty percent of the subjects in our study failed to
notice the gorilla! Later research by others, with equipment that
tracks subjects' eye movements, showed that people can miss the
gorilla even when they look right at it. We were stunned, and so
were the subjects themselves. When they viewed the video a second
time without counting the passes, they often expressed shock: "I
missed that?!" A few even accused us of sneakily replacing the "first
tape" with a "second tape" that had a gorilla added in.

The finding that people fail to notice unexpected events when their
attention is otherwise engaged is interesting. What is doubly
intriguing is the mismatch between what we notice and what we
think we will notice. In a separate study, Daniel Levin, of Vanderbilt
University, and Bonnie Angelone, of Rowan University, read
subjects a brief description of the gorilla experiment and asked
them whether they would see the gorilla. Ninety percent said yes.
Intuition told those research subjects (and us) that unexpected and
distinctive events should draw attention, but our gorilla experiment
revealed that intuition to be wrong. There are many cases in which
this type of intuition—a strong belief about how our own minds
work—can be consistently, persistently, and even dangerously
wrong.

The existence of this class of faulty intuitions would just be an
academic curiosity if it did not have such significant practical
consequences. If you believe you will notice unexpected events
regardless of how much of your attention is devoted to other tasks,
you won't be vigilant enough for possible risks. Consider talking or
texting on a cellphone while driving. Most people who do this
believe, or act as though they believe, that as long as they keep their
eyes on the road, they will notice anything important that happens,
like a car suddenly braking or a child chasing a ball into the street.
Cellphones, however, impair our driving not because holding one
takes a hand off the wheel, but because holding a conversation with
someone we can't see—and often can't even hear well—uses up a
considerable amount of our finite capacity for paying attention.

Flawed intuitions about the mind extend to virtually every other
domain of cognition. Consider eyewitness memory. In the vast
majority of cases in which DNA evidence exonerated a death-row
inmate, the original conviction was based largely on the testimony
of a confident eyewitness with a vivid memory of the crime. Jurors
(and everyone else) tend to intuitively trust that when people are
certain, they are likely to be right. Almost all of us have precise
memories of how we heard about the attacks of 9/11 or, if we're old
enough, the Challenger explosion or President John F. Kennedy's
assassination. But you should not be certain that your detailed
memories of those events are accurate. Study after study has shown
that memories of important events like those are no more accurate
than run-of-the-mill memories. They are more vivid, and we are
therefore more confident about their accuracy, but that confidence
is largely an illusion.

Other intuitions about the mind's workings fail in the same way. For
example, it's easy to fall prey to the belief that you understand
complex systems better than you really do. This instinct played a
role in the financial crisis, especially among investors who bought
newfangled mortgage-related bonds whose risks they did not truly
appreciate.

The most troublesome aspect of intuition may be the misleading
role it plays in how we perceive patterns and identify causal
relationships. When two events occur in close temporal proximity,
and the first one plausibly could have caused the second one, we
tend to infer that this is what must have happened. A tendency to
jump to that conclusion is not a bad "default setting" for the human
mind, especially in light of the circumstances in which it evolved. In
a nonindustrialized society, with no computers, Internet, Google, or
even public libraries to access information, the only ways to infer
cause and effect were personal experiences and the stories told by
others. If a friend ate berries from a particular bush and soon
became sick, you might wisely avoid those berries yourself. But your
friend's illness might have had nothing to do with the berries.

To determine whether two events are truly associated, we must
consider how frequently each one occurs by itself, and how
frequently they occur together. With just one or a few anecdotes,
that's impossible, so it pays to err on the side of caution when
inferring the existence of an association from a small number of
examples. Verifying the existence of a genuine association becomes
trivial, though, when we can rely on the accumulated experience of
hundreds, thousands, or even millions of people. We can decide
which car to buy based on the compiled ratings in Consumer
Reports rather than on the rantings of a disgruntled owner who
happens to be a cousin (or on the manufacturer's slick ad
campaign). We can rely on accumulated data, but too often we
don't. Why not? Because our intuitions respond to vivid stories, not
abstract statistics.
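
To make that base-rate logic concrete, here is a minimal Python sketch of the comparison described above, applied to the berry example. The helper function and all of the counts are invented purely for illustration; only the logic of comparing the two conditional rates comes from the argument in the article.

```python
# A minimal sketch of the base-rate check described above. The counts are made up
# for illustration; the point is simply to compare the two conditional rates.

def got_sick_rates(sick_and_ate, well_and_ate, sick_and_didnt, well_and_didnt):
    """Return P(sick | ate berries) and P(sick | didn't eat berries) from a 2x2 table."""
    p_sick_given_ate = sick_and_ate / (sick_and_ate + well_and_ate)
    p_sick_given_didnt = sick_and_didnt / (sick_and_didnt + well_and_didnt)
    return p_sick_given_ate, p_sick_given_didnt

# The vivid anecdote fills only one cell of the table (one friend ate the berries
# and got sick). With hypothetical accumulated experience from many people, the
# comparison becomes possible:
ate, not_ate = got_sick_rates(sick_and_ate=12, well_and_ate=88,
                              sick_and_didnt=11, well_and_didnt=89)
print(ate, not_ate)  # roughly 0.12 vs. 0.11 -- no real association despite the anecdote
```

With only the single anecdote, three of the four cells are empty and neither rate can be computed, which is exactly why a lone story cannot establish an association.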

Imagine that your 2-year-old child is diagnosed with an ear
infection. Your pediatrician prescribes an antibiotic and, within 48
hours, your child feels better and the infection is gone. Did the
antibiotic work? There is no evidence that it did. The infection
might have resolved on its own without antibiotics. The first step in
demonstrating the efficacy of a drug is to see whether taking it leads
to greater improvements than not taking it. To do that, you would
first need to show that improvement rates are higher for people who
receive the drug than for those who do not. Showing that
association is a necessary first step, but it still does not show that
the drug caused the improvement. A crucial second step is to
randomly assign some patients to receive the antibiotic for their ear
infections and others to receive a placebo. Only if the antibiotic
group healed faster than the placebo group could you conclude that
the antibiotic caused the improvement.
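
As a hedged illustration of that two-step logic, here is a toy simulation in which a hypothetical antibiotic adds nothing to an assumed 70 percent spontaneous-recovery rate. The rates and group sizes are invented, not clinical figures; the sketch only shows why the anecdote proves nothing and the randomized comparison does.

```python
# A toy simulation of the two-step logic above. All rates are invented: assume the
# infection clears on its own within 48 hours for 70% of children, and that the
# (hypothetical) antibiotic adds nothing.
import random

random.seed(0)
BASE_RECOVERY_RATE = 0.70   # assumed spontaneous recovery rate
DRUG_EFFECT = 0.0           # assume the drug contributes nothing

def run_trial(n_per_group=1000):
    """Randomly assign children to drug or placebo and return improvement rates."""
    drug_group = [random.random() < BASE_RECOVERY_RATE + DRUG_EFFECT
                  for _ in range(n_per_group)]
    placebo_group = [random.random() < BASE_RECOVERY_RATE
                     for _ in range(n_per_group)]
    return sum(drug_group) / n_per_group, sum(placebo_group) / n_per_group

drug_rate, placebo_rate = run_trial()
print(f"improved on drug: {drug_rate:.0%}, improved on placebo: {placebo_rate:.0%}")
# Both rates hover around 70%: most treated children "got better after the
# antibiotic" (the anecdote), yet the randomized comparison shows no drug effect.
```

In this made-up world, parents would constantly observe children improving within 48 hours of taking the drug, yet the comparison between randomized groups reveals that the drug itself did nothing.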

Compared with epidemiological studies and clinical trials,
anecdotes—with their lack of control groups—look downright
pitiful. Yet we rely on anecdotal causal reasoning all the time,
without even realizing the giant leaps of logic we are making. In a
recent issue of The New Yorker, John Cassidy writes about U.S.
Treasury Secretary Timothy Geithner's efforts to combat the
financial crisis. "It is inarguable," writes Cassidy, "that Geithner's
stabilization plan has proved more effective than many observers
expected, this one included." It's easy for even a highly educated
reader to pass over a sentence like that one and miss its unjustified
inference about causation. The problem lies with the word
"effective." How do we know what effect Geithner's plan had?
History gives us a sample size of only one—in essence, a very long
anecdote. We know what financial conditions were before the plan
and what they are now (in each case, only to the extent that we can
measure them reliably—another pitfall in assessing causality), but
how do we know that things wouldn't have improved on their own
had the plan never been adopted? Perhaps they would have
improved even more without Geithner's intervention, or much less.
The "data" are consistent with all of those possibilities, but Cassidy
and most of his readers are drawn to the most intuitive conclusion:
that Geithner's 2009 plan caused the improvements seen in 2010.

We are not naïvely arguing that people should trust only
double-blind studies with random assignment when inferring
cause. If a man points a loaded gun at us, we don't doubt the
outcome of his pulling the trigger, and we won't wait to be shown a
peer-reviewed journal article about the appropriate controlled
experiment before we start running. There is a plausible and
well-established mechanism by which bullets fired from a gun kill
people. In simple situations like that, we can safely generalize from
a set of principles (Newtonian mechanics) that are well understood
and that do involve causal tests that long ago proved the principles
correct. In the Geithner example, though, and in many, many other
situations, there is no simple analogy with well-understood causal
relationships. For complex systems like the global economy, human
physiology, or the human mind itself, inferring cause from single
examples is not logically justified, because we do not have a
complete enough understanding of the internal workings of the
system.

Take the case of the perceived link between childhood vaccinations
and autism. Nowadays children receive several vaccines before age
2, and autism is often diagnosed in 2- and 3-year-olds. When a child
is diagnosed with autism, parents naturally and understandably
seek possible causes. Vaccination involves the introduction of
unusual foreign substances (dead viruses, attenuated live viruses,
and preservative chemicals) into the body, so it's easy to imagine
that those things could profoundly affect a child's behavior. But
more than a dozen large-scale epidemiological studies, involving
hundreds of thousands of subjects, have shown that children who
were vaccinated are no more likely to be diagnosed with autism
than are children who were not vaccinated. In other words, there is
no association between vaccination and autism. And in the absence
of an association, there cannot be a causal link.

Many people who believe that vaccination can cause autism are
aware of those data. But the intuitive cause-detector in our minds is
driven by stories, not statistics, and once a compelling story leads us
to ascribe an effect to a cause, we can hold to that belief as
stubbornly as when we trust in our ability to talk on a phone while
driving—or to spot a person wearing a gorilla suit. In a way,
intuition and statistics are like oil and water: They can easily coexist
in our minds without ever interacting. That's one reason some in
the media continue to treat the vaccine-autism link as a
"controversy"—the emotional stories of parents have a constant tug
on our beliefs because their effects can't be wiped away by knowing
the statistics, no matter how solid they are.

Malcolm Gladwell is regarded as an exceptional science writer in
part because of the effective way he uses stories. But it's not just
that Gladwell is a better storyteller than his peers. He deploys his
stories—anecdotes, really—as part of a compelling rhetorical
strategy. Gladwell surrounds his arguments with examples that
suggest an association, letting his readers infer the causal
relationships he wants to convey. In Blink, he begins his argument
with a case in which intuition revealed the kouros fraud, and
readers conclude for themselves that putting more trust in their
intuition can make them better thinkers. Indeed, experiments have
shown that the more mental work readers have to do to infer a cause
from a set of facts, the more memorable the causal inference will be.
Gladwell, like most good writers, is a master of letting readers
"discover" his argument rather than hitting them over the head with
it.

There is nothing wrong with using Gladwell's rhetorical technique,
as long as the examples are truly illustrative of a valid causal
relationship. Charities do this when they highlight the plight of a
single individual, with a name and a face, rather than the numerical
magnitude of a problem: The stories bring in more money than the
statistics. The danger comes from the fact that we promiscuously
infer cause from such positive anecdotes in the absence of proper
evidence, or even in the face of contradictory evidence. Most people
aren't inveterate skeptics vigilantly testing each anecdote to make
sure it is representative of an overall pattern.

The actress Jenny McCarthy has used her celebrity to promote
proposed cures for autism, such as a special diet she designed for
her own autistic son. She often talks about the thousands of parents
who have let her know that her regimen helped their children.
McCarthy believes, and wants her audience to believe, that those
parents have made a valid inference about the effects of the diet.
The accumulation of examples in which a possible cause and effect
co-occur, no matter how emotionally compelling, provides no
evidence of a true association. In McCarthy's case, parents who tried
her cure and had no success are unlikely to write to her. Parents
who didn't try the cure at all are even less likely to drop her a note
reporting that their children got better without trying it—especially
if they are among the millions of parents who have never even
heard of her proposal.

To know whether intuition should trump analysis, we need more
than case studies of initial impressions that were later vindicated.
What we need to know is how often experts intuitively identify a
forgery despite preliminary scientific analysis suggesting that it was
genuine (the kouros case), and how often experts intuitively believe
a piece to be genuine only to be proven wrong (the Thomas J. Wise
forgeries). Conversely, how often do experts make the mistake of
intuiting a forgery when scientific analysis later proves the work to
be authentic? Without comparing how frequently intuitions
outperform analysis for both genuine and fake items, there is no
way to draw general lessons about the power of intuition.

The kouros example is effective because it capitalizes on our
tendency to generalize from a single positive association, leading to
the conclusion that intuition trumps reason. But in this case, a bit of
thought would show that conclusion to be unlikely, even within the
confined realm of art fakery. Think about how often experts
throughout history have been duped by forgers because intuition
told them that they were looking at the real thing. It is ironic that
Gladwell (knowingly or not) exploits one of the greatest weaknesses
of intuition—our tendency to blithely infer cause from
anecdotes—in making his case for intuition's extraordinary power.

Intuition is not always wrong, but neither is it a shortcut around the
hard work of logical analysis and rational choice. The trouble with
intuition is that while intuitive modes of thought are easier to use
than analytical modes, they are poorly adapted to many
circumstances and decisions we face in the modern world. If we
follow our gut instincts, we will talk on the telephone while we
drive, have too much trust in eyewitnesses, and believe we know
what causes what—in health care, finance, politics, and every other
domain—without even realizing that we haven't considered the right
evidence, let alone come to the right conclusions.

Daniel J. Simons is a professor of psychology at the University of
Illinois at Urbana-Champaign. Christopher F. Chabris is an
assistant professor of psychology at Union College in New York.
They are the authors of the new book The Invisible Gorilla, and
Other Ways Our Intuitions Deceive Us (Crown Publishers).

Why losers have delusions of grandeur
NYPOST.com, http://www.nypost.com/p/news/opinion/opedcolumnists/why_losers_hav...

Charles Darwin observed that “ignorance more frequently begets confidence than does
knowledge.” That was certainly true on the day in 1995 when a man named McArthur
Wheeler boldly robbed two banks in Pittsburgh without using a disguise. Security camera
footage of him was broadcast on the evening news the same day as the robberies, and he
was arrested an hour later. Mr. Wheeler was surprised when the police explained how
they had used the surveillance tapes to catch him. “But I wore the juice,” he mumbled
incredulously. He seemed to believe that rubbing his face with lemon juice would blur his
image and make him impossible to catch.

In movies, criminal masterminds often
are geniuses, James Bond villains in
volcano lairs. But the stereotype doesn’t
apply to actual cons, at least not the
ones who get caught.

Studies show those convicted of crimes
are, on average, less intelligent than
non-criminals. And they can be
spectacularly foolish. One of us had a
high school classmate who decided to
vandalize the school — by spray painting
his own initials on the wall. A Briton
named Peter Addison went one step
further and vandalized the side of a
building by writing “Peter Addison was
here.” Sixty-six-year-old Samuel Porter
tried to pass a one-million-dollar bill at a
supermarket in the United States and
became irate when the cashier wouldn’t
make change for him. All of these people
seem to have been under what we call
the “illusion of confidence,” which is the
persistent belief that we are more skilled
than we really are — in this case, that
the criminals were so good they would
not get caught.

The story of McArthur Wheeler was told
by social psychologists Justin Kruger
and David Dunning in a brilliant paper
entitled “Unskilled and Unaware of It.” In
a set of clever experiments, Kruger and
Dunning showed that people with the
least skill are the most likely to
overestimate their abilities. For example,
they measured people’s sense of humor
(psychologists have learned that almost
anything can be measured) and found
that those who scored the lowest on their
test still thought they had a better-
than-average sense of what is funny.

These findings help to explain why
shows like “American Idol” and “Last
Comic Standing” attract so many
aspiring contestants who have no hope
of qualifying, let alone winning. Many are
just seeking a few seconds of TV time
and a shot at “Pants on the Ground”
fame, but some seem genuinely shocked
when the judges reject them.

It turns out that the illusion of confidence can survive even the measurement of skill.

Chess, for instance, has a mathematical rating system that provides up-to-date, accurate and
precise numerical information about a player’s “strength” (chess jargon for ability) relative to other
players. Ratings are public knowledge and are printed next to each player’s name on tournament
scoreboards. Ratings are valued so highly that chess players often remember their opponents
better by their ratings than by their names or faces. “I beat a 1600” or “I lost to a 2100” are not
uncommon things to hear in the hallway outside the playing room.

Armed with knowledge of their own ratings, players ought to be exquisitely aware of how competent
they are. But what do they actually think about their own abilities? Some years ago, in a study we
conducted with our colleague Daniel Benjamin, we asked a group of chess players at major
tournaments two simple questions: “What is your most recent official chess rating?” and “What do
you think your rating should be to reflect your true current strength?”

As expected, all of the players knew their actual ratings. Yet 75% of them thought that their rating
underestimated their true playing ability. The magnitude of their overconfidence was stunning: On
average, these competitive chess players estimated that they would win a match against another
player with the exact same rating as their own by a two-to-one margin — a crushing victory. Of
course, the most likely outcome of such a match would be a tie.
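
For readers who want to see why a two-to-one expectation against an equally rated opponent is so striking, here is a quick check using the standard Elo expected-score formula. The formula itself is ordinary chess-rating arithmetic; the specific ratings and the interpretation below are only illustrative and are not figures from the study.

```python
# A back-of-the-envelope check using the standard Elo expected-score formula.
# The ratings used here are arbitrary examples, not data from the study.
import math

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win = 1, draw = 0.5, loss = 0) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Against an opponent with the exact same rating, the expected score is 50%.
print(expected_score(1800, 1800))  # 0.5

# How large a rating edge would justify expecting to win by a two-to-one margin
# (an expected score of 2/3)? Solve 1/(1 + 10^(-d/400)) = 2/3 for d.
d = 400 * math.log10(2)
print(round(d))  # ~120 rating points
print(round(expected_score(1800 + d, 1800), 3))  # ~0.667
```

In other words, expecting to beat an equally rated opponent two games to one amounts to believing that one's true strength is roughly 120 rating points higher than one's published rating.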

This tendency for the least skilled among us to overestimate their abilities the most has more
serious consequences than an inflated sense of humor or chess ability. Everyone has encountered
obliviously incompetent managers who make life miserable for their underlings because they suffer
from the illusion of confidence. And as the joke reminds us, the people who graduate last in their
medical school class are still doctors; what is less funny is that they probably believe they are still
the best ones.

Daniel Simons and Christopher Chabris are the authors of “The Invisible Gorilla, and Other Ways
Our Intuitions Deceive Us” (Crown). Visit their website at theinvisiblegorilla.com.
