# Heuristics and Biases

Many decisions are based on beliefs concerning the likelihood of uncertain events. Occasionally, such beliefs are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability involves judgements based on data of limited validity, which are processed according to heuristic rules. Reliance on these rules, however, leads to systematic errors, and such biases are also found in the intuitive judgement of probability. Kahneman and Tversky1 describe heuristics that are employed to assess probabilities and to predict values, enumerate the biases to which these heuristics lead, and discuss the applied and theoretical implications of these observations. The discussion below is based broadly on writings by Kahneman and Tversky; the following biases are discussed: the law of small numbers, anchors, availability, the affect heuristic, representativeness, the conjunction fallacy, stereotyping, regression to the mean, and substitution.

Kahneman2 starts with the notion that our minds contain two interactive modes of thinking: One part of our mind (which he calls System 1) operates automatically and quickly, with little or no effort and no sense of voluntary control. The other part of our mind (which he calls System 2) allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.3 In other words, System 1 is unconscious, intuitive thought (automatic pilot), while slower System 2 is conscious, rational thinking (effortful system).

1 Amos Tversky and Daniel Kahneman, Judgement under Uncertainty: Heuristics and Biases, 1974.
2 Daniel Kahneman, Thinking, Fast and Slow (2011).
3 Ibid., page 21.

When we are awake, most of our actions are controlled automatically by System 1, which continuously monitors human behaviour. The mind cannot consciously perform the thousands of complex tasks per day that human functioning requires. System 2 activates when System 1 cannot deal with a task, when more detailed processing is needed; normally it remains in a low-effort mode, and attention and effort are required to make the lazy System 2 act.4 System 2 is by its nature lazy, while System 1 is prone to biases and errors, which may be prevented by a deliberate intervention of System 2. System 2 requires effort and acts of self-control in which the intuitions and impulses of System 1 are overcome.5 System 1 works in a process called associative activation: ideas that have been evoked trigger connected, coherent ideas.6 System 2 is required when cognitive strain arises from unmet demands of the situation that force it to focus.7 Our System 1 develops our image of what is normal; associative ideas are formed which represent the structure of events in our life, our interpretation of the present, and our expectation of the future.8 System 1 is a kind of machine which jumps to conclusions, which can lead to intuitive errors; in addition, only System 2 can construct thoughts in a step-by-step fashion.9 System 1 forms basic assessments by continuously monitoring what is going on inside and outside the mind, generating assessments of various aspects of the situation without specific intention and with little or no effort. These basic assessments are easily substituted for more difficult questions.10 The interactions of Systems 1 and 2 are usually highly efficient; however, System 2 is often lazy.

4 Ibid., chapter 2.
5 Ibid., chapter 3.
6 Ibid., chapter 4.
7 Ibid., chapter 5.
8 Ibid., chapter 6.
9 Ibid., chapter 7.
10 Ibid., chapter 8.

We first define heuristics: "a simple procedure that helps find adequate, though often imperfect, answers to difficult questions."11 A type of heuristic is the halo effect, "the tendency to like (or dislike) everything about a person, including things you have not observed."12 A simple example is rating a baseball player as good at pitching because he is handsome and athletic. Heuristics allow humans to act fast, but they can also lead to wrong conclusions (biases) because they sometimes substitute an easier question for the one asked. The automatic part of our mind is not prone to doubt; it suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. The effortful part of our mind is capable of doubt, because it can maintain incompatible possibilities at the same time.

We first discuss the law of small numbers, which states that researchers who pick too small a sample leave themselves at the mercy of sampling luck.13 Traditionally, psychologists do not use calculations to decide on sample size; they use their judgement, which is commonly flawed. Predicting results rests on the fact that results of large samples deserve more trust than results of smaller samples; we know this as the law of large numbers. The following two statements mean exactly the same thing:
• Large samples are more precise than small samples.
• Small samples yield extreme results more often than large samples do.
People are not adequately sensitive to sample size. Random events by definition do not behave in a systematic fashion, but collections of random events do behave in a highly regular fashion. This can lead to an illusion of causation. Let us repeat the key result: researchers who pick too small a sample leave themselves at the mercy of sampling luck. The strong bias toward believing that small samples closely resemble the population from which they are drawn is also part of a larger story.

11 Ibid., page 98.
12 Ibid., page 82.
13 Ibid., chapter 10.
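The equivalence of the two bulleted statements about sample size can be checked with a short simulation. This is an illustrative sketch, not from the source text; the 50% true rate, the 60% "extreme" threshold, and the hospital framing are assumptions chosen for the demonstration.

```python
import random

random.seed(0)  # reproducible illustration

def extreme_rate(sample_size, trials=10_000, threshold=0.6):
    """Fraction of samples whose observed proportion reaches 60% or
    beyond (in either direction) when the true rate is 50%."""
    extreme = 0
    for _ in range(trials):
        hits = sum(random.random() < 0.5 for _ in range(sample_size))
        share = hits / sample_size
        if share >= threshold or share <= 1 - threshold:
            extreme += 1
    return extreme / trials

small = extreme_rate(sample_size=20)    # a "small hospital"
large = extreme_rate(sample_size=200)   # a "large hospital"
print(small, large)  # the small samples are extreme far more often
```

With these assumed numbers, roughly half the size-20 samples are "extreme" while almost none of the size-200 samples are: precision of large samples and extremeness of small samples are the same fact seen from two sides.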

The law of small numbers is part of two larger stories about the workings of the mind:
• The exaggerated faith in small samples is only one example of a more general illusion: we pay more attention to the content of messages than to information about their reliability, and we are prone to exaggerate the consistency and coherence of what we see.
• Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations.
Many facts of the world are due to chance, including accidents of sampling. Causal explanations of chance events are inevitably wrong. Our predilection for causal thinking exposes us to serious mistakes in evaluating the randomness of truly random events. We are pattern seekers, believers in a coherent world in which regularities appear not by accident but as a result of mechanical causality or of someone's intention. We do not expect to see regularity produced by a random process, and when we detect what appears to be a rule, we quickly reject the idea that the process is truly random. Random processes produce many sequences that convince people that the process is not random after all.

Another example of a heuristic bias is anchoring, in which judgements are influenced by an uninformative number (an anchor). People are influenced when they consider a particular value for an unknown number before estimating that number; the estimate then stays close to the anchor. For example, two groups estimated Gandhi's age when he died. The first group was initially asked whether he was more than 114 years old; the second group was asked whether he was 35 or older. The first group then estimated a higher age at death than the second.14 Two different mechanisms produce anchoring effects, one for each system. There is a form of anchoring that occurs in a deliberate process of adjustment, an operation of System 2, and there is anchoring that occurs by a priming effect, an automatic manifestation of System 1 which results from associative activation. Insufficient adjustment neatly explains why you are likely to drive too fast when you come off the highway onto city streets, especially if you are talking with someone as you drive.

14 Ibid., chapter 11.

Adjustment is a deliberate, effortful attempt to find reasons to move away from the anchor. People adjust less (stay closer to the anchor) when their mental resources are depleted, either because their memory is loaded with digits or because they are slightly drunk; insufficient adjustment is a failure of a weak or lazy System 2. People who are instructed to shake their head when they hear the anchor, as if they rejected it, move farther from the anchor, and people who nod their head show enhanced anchoring. Suggestion, by contrast, is a priming effect, which selectively evokes compatible evidence. Suggestion and anchoring are both explained by the same automatic operation of System 1: System 1 tries its best to construct a world in which the anchor is the true number. System 1 understands sentences by trying to make them true, and the selective activation of compatible thoughts produces a family of systematic errors that make us gullible and prone to believe too strongly whatever we believe.

A key finding of anchoring research is that anchors that are obviously random can be just as effective as potentially informative anchors. Anchors clearly do not have their effects because people believe they are informative; the powerful effect of random anchors is an extreme case of this phenomenon, because a random anchor obviously provides no information at all. System 2 is susceptible to the biasing influence of anchors that make some information easier to retrieve. A strategy of deliberately "thinking the opposite" may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects. The psychological mechanisms that produce anchoring make us far more suggestible than most of us would want to be, and of course there are quite a few people who are willing and able to exploit our gullibility. Anchoring effects, sometimes due to priming, sometimes to insufficient adjustment, are everywhere. A message, unless it is immediately rejected as a lie, will have the same effect on the associative system regardless of its reliability. The gist of the message is the story, which is based on whatever information is available, even if the quantity of the information is slight and its quality is poor. Whether the story is true, or believable, matters little, if at all. The main moral of priming research is that our thoughts and our behaviour are influenced, much more than we know or want, by the environment of the moment.

The concept of availability is the process of judging frequency by "the ease with which instances come to mind."15 A question considered early was how many instances must be retrieved to get an impression of the ease with which they come to mind. We now know the answer: none. The availability heuristic, like other heuristics of judgement, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind. Substitution of questions inevitably produces systematic errors:
• A salient event that attracts your attention will be easily retrieved from memory.
• A dramatic event temporarily increases the availability of its category.
• Personal experiences, pictures, and vivid examples are more available than incidents that happened to others, mere words, or statistics.
Resisting this large collection of potential availability biases is possible, but tiresome. The ease with which instances come to mind is a System 1 heuristic, which is replaced by a focus on content when System 2 is more engaged. One of the best-known studies of availability suggests that awareness of your own biases can contribute to peace in marriages, and probably in other joint projects.

15 Ibid., chapter 12.

People who let themselves be guided by System 1 are more strongly susceptible to availability biases than others who are in a higher state of vigilance. The following are some conditions in which people "go with the flow" and are affected more strongly by ease of retrieval than by the content they retrieved:
• when they are engaged in another effortful task;
• when they are in a good mood;
• if they are depressed;
• if they are knowledgeable novices;
• when they have faith in intuition;
• when they are, or are made to feel, powerful.16

Availability effects help explain the pattern of insurance purchases and protective action after disasters, whether by individuals or governments. Victims and near victims are very concerned after a disaster, but the memories of the disaster dim over time, and so do worry and diligence. Protective actions are usually designed to be adequate to the worst disaster actually experienced.

This has impacts on public policy, particularly with reference to the effect of the media. Estimates of causes of death are warped by media coverage, and the coverage is itself biased towards novelty and poignancy. The media do not just shape what the public is interested in, but are also shaped by it. The importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind.

The notion of an affect heuristic was developed, in which people make judgements and decisions by consulting their emotions: Do I like it? Do I hate it? How strongly do I feel about it? "The emotional tail wags the rational dog." The affect heuristic simplifies our lives by creating a world that is much tidier than reality; in the real world, we often face painful trade-offs between benefits and costs.

A particularly important concept here is the availability cascade. Availability cascades are real, and they undoubtedly distort priorities in the allocation of public resources. One perspective is offered by Cass Sunstein, who would seek mechanisms that insulate decision makers from public pressures, letting the allocation of resources be determined by impartial experts who have a broad view of all risks and of the resources available to reduce them. Paul Slovic, on the other hand, trusts the experts much less and the public somewhat more than Sunstein does, and he points out that insulating the experts from the emotions of the public produces policies that the public will reject, an impossible situation in a democracy.

16 Ibid., chapter 13.

Judging probability by representativeness has important virtues: the intuitive impressions that it produces are often, indeed usually, more accurate than chance guesses would be.17 In the absence of specific information about a subject, you will go by the base rates. Representativeness, however, involves ignoring both the base rates and the doubts about the veracity of the description. Activation of an association with a stereotype is an automatic activity of System 1. It is entirely acceptable for judgements of similarity to be unaffected by base rates and by the possibility that the description was inaccurate, because judgements of similarity and probability are not constrained by the same logical rules. Logicians and statisticians have developed competing definitions of probability, all very precise. Yet people who are asked to assess probability are not stumped, because they do not try to judge probability as statisticians and philosophers use the word: a question about probability or likelihood activates a mental shotgun, evoking answers to easier questions. This is a serious mistake. Although it is common, prediction by representativeness is not statistically optimal, and anyone who ignores base rates and the quality of evidence in probability assessments will certainly make mistakes.

17 Ibid., chapter 14.

In other situations, the stereotypes are false and the representativeness heuristic will mislead, especially if it causes people to neglect base-rate information that points in another direction. One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events. The second sin of representativeness is insensitivity to the quality of evidence. The relevant "rules" for such cases are provided by Bayesian statistics: the logic of how people should change their mind in the light of evidence. There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated. Some people ignore base rates because they believe them to be irrelevant in the presence of individual information; others make the same mistake because they are not focussed on the task. People without training in statistics are quite capable of using base rates in predictions under some conditions: instructing people to "think like a statistician" enhanced the use of base rate information, while the instruction to "think like a clinician" had the opposite effect. To be useful, your beliefs should be constrained by the logic of probability, and this is often not intuitively obvious.

Kahneman and Tversky introduced the idea of the conjunction fallacy, which people commit when they judge a conjunction of two events to be more probable than one of the events in a direct comparison. The word fallacy is used when people fail to apply a logical rule that is obviously relevant: when you specify a possible event in greater detail, you can only lower its probability.18

18 Ibid., chapter 15.
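The logic behind the conjunction rule is simple arithmetic. A minimal sketch in the spirit of Kahneman's well-known "Linda" example; the specific probabilities are made-up illustrative assumptions, not figures from the source:

```python
# Conjunction rule: P(A and B) can never exceed P(A).
# All numbers below are illustrative assumptions.
p_teller = 0.05                  # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # assumed P(feminist | bank teller)
p_teller_and_feminist = p_teller * p_feminist_given_teller

# Adding detail (the conjunction) can only lower the probability,
# no matter how plausible the added detail makes the story sound.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)
```

Whatever values are plugged in, the product of a probability with a number no greater than 1 cannot exceed the original probability; judging the conjunction as more likely is therefore always a logical error.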

Adding detail to scenarios makes them more persuasive, but less likely to come true. The fallacy remains attractive even when you recognise it for what it is; the laziness of System 2 is an important fact of life. The blatant violations of the logic of probability observed in transparent problems were interesting, and the observation that representativeness can block the application of an obvious logical rule is also of some interest. Less is more, sometimes even in joint evaluation: the scenario that is judged more probable is unquestionably more plausible, a more coherent fit with all that is known. The uncritical substitution of plausibility for probability has pernicious effects on judgements when scenarios are used as tools of forecasting. Intuition governs judgements in the between-subjects condition; logic rules in joint evaluation. In other problems, intuition often overcame logic even in joint evaluation, although some conditions were identified in which logic prevails. The solution to the puzzle appears to be that a question phrased as "how many?" makes you think of individuals, but the same question phrased as "what percentage?" does not. A reference to a number of individuals brings a spatial representation to mind, and this frequency representation makes it easy to appreciate that one group is wholly included in the other.

Causes trump statistics, in the sense that statistical base rates are generally underweighted while causal base rates are treated as information about the individual.19 Consider a standard problem of Bayesian inference with two items of information: a base rate and the imperfectly reliable testimony of a witness. You can probably guess what people do when faced with this problem: they ignore the base rate and go with the witness.

19 Ibid., chapter 16.
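The base-rate-plus-witness problem above can be worked through with Bayes' rule. The sketch below uses the numbers commonly quoted for this cab problem (85% of cabs Green, 15% Blue, witness correct 80% of the time); treat them as an illustration:

```python
# Bayes' rule for the base-rate-plus-witness problem.
# Numbers are the commonly quoted illustration for this problem.
p_blue = 0.15                  # base rate: 15% of cabs are Blue
p_green = 0.85                 # 85% are Green
p_say_blue_given_blue = 0.80   # witness identifies colour correctly 80% of the time
p_say_blue_given_green = 0.20  # and errs 20% of the time

# P(Blue | witness says "Blue")
numerator = p_blue * p_say_blue_given_blue
evidence = numerator + p_green * p_say_blue_given_green
posterior = numerator / evidence

print(round(posterior, 2))  # 0.41, far below the 80% the witness suggests
```

Going with the witness means answering 80%; combining the witness with the base rate gives roughly 41%, which is why ignoring the base rate is such a costly shortcut.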