Question Certainty
DECISION MAKING
by Walter Frick
From the October 2015 Issue
According to legend, around 550 BC, Croesus, the king of Lydia, held one of the world’s earliest prediction tournaments. He sent emissaries to seven oracles to ask them to foretell what he would be doing that day. Pythia, the Oracle of Delphi, answered correctly: He would be cooking lamb-and-tortoise stew.
Croesus didn’t perform this exercise out of mere curiosity. He had a decision to make. Confident that he’d discovered a reliable oracle, the king then asked Pythia whether he should
attack Persia. She said that if he did, he would destroy a mighty empire. Croesus attacked but was defeated. The problem was interpretation: Pythia never said which mighty empire
would be destroyed.
Whether the story is fact or fiction, Croesus’ defeat illuminates a couple of truths: Forecasting is difficult, and pundits often claim their predictions have come true when they
haven’t. Still, accurate predictions are essential to good decision making—in every realm of life. As Philip Tetlock, a professor of both management and psychology at the University
of Pennsylvania, and his coauthor, Dan Gardner, write in Superforecasting, “We are all forecasters. When we think about changing jobs, getting married, buying a home, making an
investment, launching a product, or retiring, we decide based on how we expect the future to unfold.”
So what is the secret to making better forecasts? From 1984 to 2004, Tetlock tracked political pundits’ ability to predict world events, culminating in his 2006 book Expert Political
Judgment. He found that overall, his study subjects weren’t very good forecasters, but a subset did perform better than random chance. Those people stood out not for their
credentials or ideology but for their style of thinking. They rejected the idea that any single force determines an outcome. They used multiple information sources and analytical tools
and combined competing explanations for a given phenomenon. Above all, they were allergic to certainty.
Superforecasting describes Tetlock’s work since. In 2011 he and his colleagues entered a prediction tournament sponsored by the U.S. government’s Intelligence Advanced Research
Projects Activity. They recruited internet users to forecast geopolitical events under various experimental conditions, and—harnessing the wisdom of this crowd—they won. In the
process, they found another group of “superforecasters” to study. Most weren’t professional analysts, but they scored high on tests of intelligence and open-mindedness. Like
Tetlock’s other experts, they gave weight to multiple perspectives and weren’t afraid to change their opinions. They were curious, humble, self-critical, and less likely than most other
people to believe in fate. And although they seldom used math to make their predictions, all were highly numerate. “I have yet to find a superforecaster who isn’t comfortable with
numbers,” Tetlock writes.
Superforecasting opens one chapter with a scene from the film Zero Dark Thirty, in which the CIA analyst “Maya” assures her director she is “100%” certain (conceding only to 95%) that Osama bin Laden is hiding in the Abbottabad compound. According to Tetlock, the actual conversation never would have gone that way, because Leon Panetta, the real CIA director at the time, was comfortable using probability and diverse
estimates to make his decisions. In fact, as Micah Zenko recounts in another new book, Red Team, the CIA conducted three separate “red team” exercises before the raid, all designed
to check and challenge analysts’ assumptions. Although the real “Maya” did give that 95% estimate, she and her team were made to completely review their work. The CIA also
appointed four outside analysts to study the case, and the National Counterterrorism Center, a separate agency, conducted its own analysis, generating three probabilities that bin
Laden was in the compound: 75%, 60%, and 40%. President Obama concluded that this amounted to “a flip of the coin,” but he did, of course, authorize the raid. Tetlock dislikes
Obama’s analogy (his superforecasters would have been more precise) but not the overall process. Maya’s estimate was “more extreme than the evidence could support” and
therefore “unreasonable,” he explains. In his world, such confidence is cause for skepticism.
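The arithmetic behind those three estimates can be sketched in a few lines (this sketch is illustrative, not from the article). The simple average is what the “flip of the coin” reading glosses over; the extremizing step is an aggregation technique Tetlock’s team used in the IARPA tournament, though the exponent here is an assumed value chosen for illustration.

```python
# Sketch: aggregating independent probability estimates, like the three
# the National Counterterrorism Center produced (75%, 60%, 40%).

def average(probs):
    """Simple mean of a list of probabilities."""
    return sum(probs) / len(probs)

def extremize(p, a=2.0):
    """Push an aggregated probability away from 0.5 by raising its odds
    to the power a. The exponent a=2.0 is assumed for illustration, not
    a value reported by Tetlock."""
    odds = (p / (1.0 - p)) ** a
    return odds / (1.0 + odds)

estimates = [0.75, 0.60, 0.40]
p_avg = average(estimates)   # about 0.58, close to a coin flip
p_ext = extremize(p_avg)     # about 0.66 after extremizing
```

The point of extremizing is that when several independent analyses all lean the same way, their combined evidence justifies a more confident estimate than their plain average.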
The Zenko book is a good complement to Superforecasting, because it shows how organizations, not just individuals, can overcome their biases toward false certainty and make good
predictions, in geopolitics and business, in public and private sectors. With simulations, vulnerability probes, and alternative analyses that offer fresh eyes on a complex situation or
intentionally oppose a certain position, red teams can greatly improve the accuracy of forecasts in the same way that Tetlock’s experts do.
Zenko adds that management must buy in, committing significant resources to red teams and empowering them to be brutally honest in their analyses. Tetlock agrees. Although
great leaders should be confident and decisive, they must also possess “intellectual humility” and recognize that the world is complex and full of uncertainty, he explains. They
should learn from and lean on superforecasters and red teams, using not just one but many. If Croesus had asked all seven oracles about his planned attack on Persia, for example, he
might not have lost his empire.
A version of this article appeared in the October 2015 issue (pp.130–131) of Harvard Business Review.