How Can Decision Making Be Improved?
Dolly Chugh
New York University
Max H. Bazerman
Harvard University
Acknowledgements: We thank Katie Shonk and Elizabeth Weiss for their assistance.
Daniel Kahneman, Amos Tversky, and others have clarified the specific ways in
which decision makers are likely to be biased. As a result, we can now describe how
people make decisions with astonishing detail and reliability. Furthermore, thanks to the
normative models of economic theory, we have a clear vision of how much better
decision making could be. If we all behaved optimally, costs and benefits would always
be accurately weighed, impatience would not exist, gains would never be foregone in
order to spite others, no relevant information would ever be overlooked, and moral
behavior would always be aligned with moral attitudes. Unfortunately, we have little
understanding of how to help people overcome their many biases and behave optimally.
We propose that the time has come to move the study of biases in judgment and
decision making beyond description and toward the development of improvement
strategies. While a few important insights about how to improve decision making have
already been identified, we argue that many others await discovery. We hope judgment
and decision-making scholars will focus their attention on the search for improvement
strategies in the coming years, seeking to answer the question: how can we improve
decision making?
Errors are costly: We believe the importance of this question is somewhat self-
evident. Suboptimal decisions impose costs on individuals, families, businesses,
governments, and societies, and if we knew more about how to improve those outcomes,
individuals, families, businesses, governments, and societies would benefit. After all,
errors induced by biases in judgment lead decision makers to undersave for retirement,
How Can Decision Making Be Improved? 2
engage in needless conflict, marry the wrong partners, accept the wrong jobs, and
wrongly invade countries. Given the massive costs that can result from suboptimal
decision making, it is critical for our field to focus increased effort on improving our
understanding of how errors can be reduced.
Errors will get even costlier: The costs of suboptimal decision making have
grown, even since the first wave of research on decision biases began fifty years ago. As
the pace and complexity of modern life have increased, so has the cost of each suboptimal
decision. In addition, more and more people are being tasked with making decisions that
are likely to be biased – because of the presence of too much information, time pressure,
or other conditions that favor intuition over analysis. And as the world grows
increasingly global, each biased decision is likely to have implications for a broader
swath of society.
Demand for improvement is high: For businesspeople, physicians, politicians,
lawyers, private citizens, and many other groups for whom failures to make optimal
choices can be extremely costly, the limitations
uncovered by researchers in our field are widely publicized and highlighted to students in
many different professional and undergraduate degree programs. Those who are exposed
to our research are eager to learn the practical implications of the knowledge we have
accumulated about biased decision making so they can improve their own outcomes.
However, our field primarily offers descriptions of the biases that afflict decision
makers, with few insights into how errors can be eliminated or at least reduced.
The search for solutions will also benefit scholars interested in the mental
processes that underlie biased judgment. Through rigorous testing of what does and what
does not improve decision making, researchers are sure to generate new insights into the
mechanisms that produce bias. This will deepen our already rich descriptive
understanding of decision making.
If our field is to take up the search for ways of reducing decision-
making errors, the next question is: where to begin? To address this question, we
organize the scattered knowledge that judgment and decision-making scholars have
amassed over the last several decades about how to reduce biased decision making. Our
aim is to highlight the most promising avenues for future research on cures for biased
decision making.
Before turning to this work, it is important to note how difficult finding solutions
has proved to be. In 1982, Fischhoff
reviewed the results of four strategies that had been proposed as solutions for biased
decision making: (1) offering warnings about the possibility of bias; (2) describing the
direction of a bias; (3) providing a dose of feedback; and (4) offering an extended
program of training with feedback, coaching, and other interventions designed to improve
judgment. Fischhoff concluded that the first three strategies yielded minimal success, and
even intensive, personalized training produced only limited improvement (see Bazerman &
Moore, 2008). This news was not encouraging for psychologists and economists who
hoped their research might improve people’s judgment and decision-making abilities.
We believe that Stanovich and West’s (2000) distinction between System 1 and
System 2 cognitive functioning provides a useful framework for organizing both what
scholars have learned to date about effective strategies for improving decision making
and future efforts to uncover improvement strategies. System 1 refers to our intuitive
system, which is typically fast, automatic, effortless, implicit, and emotional. System 2
refers to our more deliberate reasoning, which is slower, conscious, effortful, and logical.
In everyday life, decision makers must often act on a subset of the
available information, face time and cost constraints, and maintain a relatively small
amount of information in their usable memory. The busier people are, the more they
have on their minds, and the more time constraints they face, the more likely they will be
to rely on System 1 thinking. Thus, the frantic pace of life is likely to lead us to rely on
overcoming specific decision biases by shifting people from System 1 thinking to System
2 thinking.1 One successful strategy for moving toward System 2 thinking relies on
replacing intuition with formal analytic processes. For example, when data exists on past
inputs to and outcomes from a particular decision-making process, decision makers can
construct a linear model, or a formula that weights and sums the relevant predictor
variables to reach a quantitative forecast about the outcome.

1 It should be noted that many strategies designed to reduce decision biases by encouraging System 2
thinking have proven unsuccessful. For example, performance-based pay, repetition, and high-stakes
incentives have been shown to have little if any effect on a wide array of biases in judgment.

Researchers have found that
linear models produce predictions that are superior to those of experts across an
impressive array of domains (Dawes, 1971). The value of linear models in hiring,
admissions, and selection decisions is highlighted by research that Moore, Swift, Sharek,
and Gino (2007) conducted on the interpretation of grades, which shows that graduate
school admissions officers are unable to account for the leniency of grading at an
applicant's school, and consequently favor applicants from lenient-grading
schools. The authors argue that it would be easy to set up a linear model to avoid this
error (for example, by including in its calculation only an applicant’s standardized GPA,
adjusted by her school’s average GPA). In general, we believe that the use of linear
models can help decision makers avoid the pitfalls of many judgment biases, yet this
method has only been tested in a small subset of the potentially relevant domains.
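To make the linear-model approach concrete, here is a minimal sketch in Python. The predictor names, weights, and numbers are illustrative assumptions, not values from the cited studies; only the logic (standardize each GPA against its school's norms, then weight and sum the predictors) reflects the idea in the text.

```python
# A minimal sketch of a linear model for admissions decisions. All names,
# weights, and numbers are invented for illustration.

def standardized_gpa(gpa, school_mean_gpa, school_sd_gpa):
    """Express an applicant's GPA relative to her school's grading norms."""
    return (gpa - school_mean_gpa) / school_sd_gpa

def admission_score(std_gpa, test_percentile, weights=(0.6, 0.4)):
    """Weight and sum the predictor variables to reach a quantitative forecast."""
    w_gpa, w_test = weights
    return w_gpa * std_gpa + w_test * test_percentile

# A 3.6 from a lenient-grading school (mean GPA 3.5) is ranked below a 3.4
# from a stricter school (mean GPA 2.9), holding the test score constant.
lenient = admission_score(standardized_gpa(3.6, 3.5, 0.3), 0.5)
strict = admission_score(standardized_gpa(3.4, 2.9, 0.3), 0.5)
assert strict > lenient
```

Because the formula sees only the standardized GPA, the leniency of a school's grading cannot sway the forecast the way it sways an unaided admissions officer.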
Another strategy for engaging System 2 is to take an outsider's perspective: to try to
remove oneself mentally from a specific situation or to consider the class of decisions to
which the current problem belongs (Kahneman and Lovallo, 1993). Taking an outsider's
perspective has been shown to reduce decision makers’ overconfidence about their
knowledge (Gigerenzer, Hoffrage, & Kleinbölting, 1991), the time it would take them to
complete a task (Kahneman & Lovallo, 1993), and their odds of entrepreneurial success
(Cooper, Woo, and Dunkelberg, 1988). Decision makers may also be able to improve
their judgments by asking a genuine outsider for his or her view regarding a decision.
Other research on the power of shifting people toward System 2 thinking has
shown that simply encouraging people to “consider the opposite” of whatever decision
they are about to make reduces errors in judgment due to several particularly robust
decision biases: overconfidence, the hindsight bias, and anchoring (Larrick, 2004;
Mussweiler, Strack, & Pfeiffer, 2000). Partial debiasing of errors in judgment typically
classified as the result of “biases and heuristics” (see Tversky and Kahneman, 1974) has
also been achieved by having groups rather than individuals make decisions, training
individuals in statistical reasoning, and making people accountable for their decisions
(Larrick, 2004; Lerner & Tetlock, 1999). Some of the most successful interventions
counter the specific System 1 process
hypothesized to be the source of bias with a targeted cue to rely on System 2 processes
(Slovic and Fischhoff, 1977). In a study designed to reduce hindsight bias (the tendency
to exaggerate the extent to which one could have anticipated a particular outcome in
foresight), Slovic and Fischhoff developed a hypothesis about the mechanism producing
the bias. They believed that hindsight bias resulted from subjects’ failure to use their
available knowledge and powers of inference. Armed with this insight, Slovic and
Fischhoff hypothesized and found that subjects were more resistant to the bias if they
were provided with evidence contrary to the actual outcome. This result suggests that the
most fruitful directions for researchers seeking to reduce heuristics and biases may be
those predicated upon “some understanding of and hypotheses about people’s cognitive
processes” (Fischhoff, 1982) and how they might lead to a given bias. Along these lines,
another group of researchers hypothesized that overclaiming credit results from focusing
only on estimates of one’s own contributions and ignoring those of others in a group.
They found that requiring people to estimate not only their own contributions but also
those of others reduces overclaiming (Savitsky, Van Boven, Epley, and Wight, 2005).
Another promising stream of research that examines how System 2 thinking can
be leveraged to reduce System 1 errors has shown that analogical reasoning can be used
to reduce bounds on people's awareness (see Bazerman and Chugh 2005 for more on
bounded awareness). Building on the work of Thompson, Gentner, and Loewenstein
(2000), both Idson, Chugh, Bereby-Meyer, Moran, Grosskopf, and Bazerman (2004) and
Moran, Ritov, and Bazerman (2008) found that individuals who were encouraged to see
and understand the common principle underlying a set of seemingly unrelated tasks
were subsequently more likely to discover solutions to new tasks
that relied on the same underlying principle. This work is consistent with Thompson et
al.’s (2000) observation that surface details of learning opportunities often distract us
from the deeper principles they share.
Research also suggests that people
move from suboptimal System 1 thinking toward improved System 2 thinking when they
consider and choose between multiple options simultaneously rather than accepting or
rejecting options separately. For example, Bazerman, White and Loewenstein (1995)
find evidence that people display more bounded self-interest (Jolls, Sunstein, and Thaler,
1998) – focusing on their outcomes relative to those of others rather than optimizing their
own outcomes – when assessing one option at a time than when considering multiple
options side by side. Bazerman, Loewenstein and White (1992) have also demonstrated
that people exhibit less willpower when they weigh choices separately rather than jointly.
The research discussed above suggests that any change in a decision’s context that
promotes cool-headed System 2 thinking has the potential to reduce common biases
resulting from hotheadedness, such as impulsivity and concern about relative outcomes.
Research on joint-versus-separate decision making highlights the fact that our first
impulses tend to be more emotional than logical (Moore and Loewenstein, 2004). Some
additional suggestive results in this domain include the findings that willpower is
weakened when people are placed under extreme cognitive load (Shiv and Fedorikhin,
1999) and when they are inexperienced in a choice domain (Milkman, Rogers and
Bazerman, 2008). Other research has shown that people make less impulsive, sub-
optimal decisions in many domains when they make choices further in advance of their
consequences (see Milkman, Rogers and Bazerman, in press, for a review). A question
we pose in light of this research is: when and how can carefully selected contextual
changes that promote increased cognition be leveraged to reduce the effects of decision-
making biases?
Albert Einstein once said, “We can't solve problems by using the same kind of
thinking we used when we created them.” However, it is possible that the unconscious
mental system can, in fact, do just that. In recent years, a new general strategy for
improving biased decision making has been proposed that leverages our automatic
cognitive processes and turns them to our advantage (Sunstein and Thaler, 2003). Rather
than trying to change a decision maker’s thinking from System 1 to System 2, this
strategy tries to change the environment so that System 1 thinking will lead to good
results. This type of improvement strategy, which Thaler and Sunstein discuss at length
in their book Nudge (2008), calls upon those who design situations in which choices are
made (whether they be the decision makers themselves or other “choice architects”) to
maximize the odds that decision makers will make wise choices given known decision
biases. For example, a bias towards inaction creates a preference for default options
(Ritov and Baron, 1992). Choice architects can use this insight to improve decision
making by ensuring that the available default is the option that is likely to be best for
decision makers and/or society. Making 401k enrollment a default, for instance, has been
shown to significantly increase employees’ savings rates (Benartzi and Thaler, 2007).
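The mechanics of such a default-based nudge can be sketched in a few lines. The participation figures below are invented for illustration, not drawn from Benartzi and Thaler's data; only the core logic (inaction inherits whatever default the choice architect sets) reflects the idea in the text.

```python
# Illustrative sketch of a default-based nudge. Choice data are invented;
# None represents an employee who never states a preference.

def participation_rate(stated_choices, default_enrolled):
    """Employees who take no action (None) end up with the default option."""
    outcomes = [c if c is not None else default_enrolled for c in stated_choices]
    return sum(outcomes) / len(outcomes)

# Most employees take no action, so flipping the default from opt-in to
# opt-out raises participation without restricting anyone's choice.
choices = [True, None, None, False, None]
opt_in_rate = participation_rate(choices, default_enrolled=False)   # 0.2
opt_out_rate = participation_rate(choices, default_enrolled=True)   # 0.8
assert opt_out_rate > opt_in_rate
```

The design choice is the point: the same preferences yield very different outcomes depending on what happens to those who decide nothing.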
Changing the environment may be an especially valuable strategy for combating
biases that people do not like to admit or believe they are susceptible to. For instance,
many of us are susceptible to implicit racial bias but feel uncomfortable acknowledging
this fact, even to ourselves. Conscious efforts to simply “do better” on implicit bias tests
are usually futile (Nosek, Greenwald, & Banaji, 2007). However, individuals whose
implicit attitudes are measured by a black experimenter
rather than a white experimenter show less implicit racial bias (Lowery, Hardin, &
Sinclair, 2001; Blair, 2002). The results of this “change the environment” approach
contrast sharply with the failure of “try harder” solutions, which rely on conscious effort.
In summary, can solutions to biases that people are unwilling to acknowledge be found in
changes to the environments in which their decisions are made?
Conclusion
People put great trust in their intuition. The past 50 years of decision-making
research challenges that trust. A key task for psychologists is to identify how and in what
situations people should try to move from intuitively compelling System 1 thinking to
more deliberative System 2 thinking and to design situations that make System 1 thinking
work in the decision-maker’s favor. Clearly, minor decisions do not require a complete
System 2 analysis of the available options. But the more we
understand the repercussions of System 1 thinking, the more deeply we desire empirically
tested strategies for reaching better decisions. Recent decades have delivered description
in abundance. This paper calls for more research on strategies for improving decisions.
References
Bazerman, M. H. & Chugh, D. (2005). Bounded awareness: Focusing failures in negotiation. In L.
Thompson (Ed.), Frontiers of Social Psychology: Negotiation. Psychological Press.
Bazerman, M. H., Loewenstein, G. F., & White, S. B. (1992). Reversals of preference in allocation
decisions: Judging an alternative versus choosing among alternatives. Administrative Science Quarterly,
37(2), 220-240.
Bazerman, M.H. & Moore, D. (2008). Judgment in Managerial Decision Making (7th ed.). Hoboken, NJ:
John Wiley & Sons, Inc.
Bazerman, M.H., White, S.B., & Loewenstein, G.F. (1995). Perceptions of Fairness in Interpersonal and
Individual Choice Situations. Current Directions in Psychological Science, 4, 39-43.
Blair, I. V. (2002). The malleability of automatic stereotypes and prejudice. Personality & Social
Psychology Review, 6(3), 242-261.
Benartzi, S., & Thaler, R. H. (2007). Heuristics and biases in retirement savings behavior. Journal of
Economic Perspectives, 21(3), 81-104.
Cooper, A. C., Woo, C. Y., & Dunkelberg, W. C. (1988). Entrepreneurs' perceived chances for success.
Journal of Business Venturing, 3(2), 97-109.
Dawes, R. M. (1971). A case study of graduate admissions: Application of three principles of human
decision making. American Psychologist, 26(2), 180-188.
Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment Under
Uncertainty: Heuristics and Biases (pp. 422-444). Cambridge: Cambridge University Press.
Idson, L. C., Chugh, D., Bereby-Meyer, Y., Moran, S., Grosskopf, B., & Bazerman, M. (2004).
Overcoming focusing failures in competitive environments. Journal of Behavioral Decision Making, 17(3),
159-172.
Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338-1339.
Jolls, C., Sunstein, C. R., & Thaler, R. (1998) A behavioral approach to law and economics. Stanford Law
Review, 50, 1471-1550
Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk
and risk taking. Management Science, 39, 17-31.
Larrick, R. P. (2004). Debiasing. In D. J. Koehler & N. Harvey (Eds.), Blackwell Handbook of Judgment
and Decision Making. Oxford, England: Blackwell Publishers.
Lerner, J. S., & Tetlock, P. E. (1999). Accounting for the effects of accountability. Psychological Bulletin,
125(2), 255-275.
Lowery, B. S., Hardin, C. D., & Sinclair, S. (2001). Social influence effects on automatic racial prejudice.
Journal of Personality and Social Psychology, 81(5), 842-855.
Milkman, K.L., Rogers, T., & Bazerman, M. (2008). Highbrow films gather dust: A study of dynamic
inconsistency and online DVD rentals. HBS Working Paper 07-099.
Milkman, K.L., Rogers, T., & Bazerman, M. (in press). Harnessing our inner angels and demons: What
we have learned about want/should conflict and how that knowledge can help us reduce short-sighted
decision making. Perspectives on Psychological Science.
Moore, D., & Loewenstein, G. (2004). Self-interest, automaticity, and the psychology of conflict of interest.
Social Justice Research, 17(2), 189-202.
Moore, D. A., Swift, S. A., Sharek, Z., & Gino, F. (2007). Correspondence bias in performance evaluation:
Why grade inflation works. Tepper Working Paper 2004-E42.
Moran, S., Ritov, I., & Bazerman, M.H. (2008). Details need to be added.
Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering
the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26, 1142-
1150.
Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A
methodological and conceptual review. In J. A. Bargh (Ed.), Social Psychology and the Unconscious: The
Automaticity of Higher Mental Processes, New York: Psychology Press.
Ritov, I., & Baron, J. (1992). Status-quo and omission biases. Journal of Risk & Uncertainty, 5(1), 49-61.
Savitsky, K., Van Boven, L., Epley, N., & Wight, W. (2005). The unpacking effect in responsibility
allocations for group tasks. Journal of Experimental Social Psychology, 41, 447-457.
Shiv, B., & Fedorikhin, A. (1999). Heart and mind in conflict: The interplay of affect and cognition in
consumer decision making. Journal of Consumer Research, 26(3), 278-292.
Slovic, P. & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental
Psychology: Human Perception and Performance, 3, 544-551.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality
debate. Behavioral & Brain Sciences, 23, 645-665.
Sunstein, C. R., & Thaler, R. H. (2003). Libertarian paternalism is not an oxymoron. University of Chicago
Law Review, 70 (Fall), 1159-99.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge. New Haven: Yale University Press.
Thompson, L., Gentner, D., & Loewenstein, J. (2000). Avoiding missed opportunities in managerial life:
Analogical training more powerful than individual case training. Organizational Behavior & Human
Decision Processes, 82(1), 60-75.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185(4157), 1124-1131.