Heuristics and Biases

Many decisions are based on beliefs concerning the likelihood of uncertain events. Occasionally such beliefs are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability involves judgements based on data of limited validity, which are processed according to heuristic rules. Reliance on these rules, however, leads to systematic errors, and such biases are also found in the intuitive judgement of probability. Kahneman and Tversky1 describe heuristics that are employed to assess probabilities and to predict values; the biases to which these heuristics lead are enumerated, and the applied and theoretical implications of these observations are discussed. The discussion below is based broadly on writings by Kahneman and Tversky. The following biases are discussed: the law of small numbers, anchors, availability, the affect heuristic, representativeness, the conjunction fallacy, stereotyping, regression to the mean, and substitution.

Kahneman2 starts with the notion that our minds contain two interactive modes of thinking: One part of our mind (which he calls System 1) operates automatically and quickly, with little or no effort and no sense of voluntary control. The other part of our mind (which he calls System 2) allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.3 In other words, System 1 is unconscious, intuitive thought (automatic pilot), while slower System 2 is conscious, rational thinking (effortful system).

1 Amos Tversky and Daniel Kahneman, Judgement under Uncertainty: Heuristics and Biases, 1974.
2 Daniel Kahneman, Thinking, Fast and Slow (2011).
3 Ibid., page 21.

The mind cannot consciously perform the thousands of complex tasks per day that human functioning requires. When we are awake, most of our actions are controlled automatically by System 1; System 2 activates when System 1 cannot deal with a task, when more detailed processing is needed. The interactions of Systems 1 and 2 are usually highly efficient. However, System 1 is prone to biases and errors, which may be prevented by a deliberate intervention of System 2. System 2 is normally in a low-effort mode, and attention and effort are required for the lazy System 2 to act.4 System 2 is by its nature lazy: it requires effort and acts of self-control in which the intuitions and impulses of System 1 are overcome, and its laziness can lead to intuitive errors.5 System 1 works in a process called associative activation: ideas that have been evoked trigger connected, coherent ideas.6 System 2 is also required when cognitive strain arises from unmet demands of the situation that require it to focus.7 Our System 1 develops our image of what is normal: associative ideas are formed which represent the structure of events in our life, the interpretation of the present, and the expectation of the future.8 System 1 is a kind of machine which jumps to conclusions; only System 2 can construct thoughts in a step-by-step fashion.9 In addition, System 1 continuously monitors human behaviour, forming basic assessments of what is going on inside and outside the mind and generating assessments of various aspects of the situation without specific intention and with little or no effort. These basic assessments are easily substituted for more difficult questions.10

4 Ibid., chapter 2.
5 Ibid., chapter 3.
6 Ibid., chapter 4.
7 Ibid., chapter 5.
8 Ibid., chapter 6.
9 Ibid., chapter 7.
10 Ibid., chapter 8.

The automatic part of our mind is not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. The effortful part of our mind is capable of doubt, because it can maintain incompatible possibilities at the same time. We first define a heuristic: "a simple procedure that helps find adequate, though often imperfect, answers to difficult questions."11 A type of heuristic is the halo effect: "the tendency to like (or dislike) everything about a person, including things you have not observed."12 A simple example is rating a baseball player as good at pitching because he is handsome and athletic. Heuristics allow humans to act fast, but they can also lead to wrong conclusions (biases) because they sometimes substitute an easier question for the one asked.

We first discuss the law of small numbers, which basically states that researchers who pick too small a sample leave themselves at the mercy of sampling luck.13 Random events by definition do not behave in a systematic fashion, but collections of random events do behave in a highly regular fashion; we know this as the law of large numbers. Predicting results is based on the following facts: results of large samples deserve more trust than smaller samples, and the following two statements mean exactly the same thing:
• Large samples are more precise than small samples.
• Small samples yield extreme results more often than large samples do.
People are not adequately sensitive to sample size. Traditionally, psychologists do not use calculations to decide on sample size; they use their judgement, which is commonly flawed. Let us repeat the result: researchers who pick too small a sample leave themselves at the mercy of sampling luck. But also, and more significantly, this can lead to an illusion of causation. The strong bias toward believing that small samples closely resemble the population from which they are drawn is also part of a larger story: we are prone to exaggerate the consistency and coherence of what we see.

11 Ibid., page 98.
12 Ibid., page 82.
13 Ibid., chapter 10.
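The two equivalent statements about sample size can be checked with a short simulation. The sketch below counts how often a sample of births is "extreme" (60% or more of one sex); the sample sizes and the 60% threshold are illustrative choices, not figures from the text:

```python
import random

random.seed(42)

def extreme_share(n, trials=10_000, threshold=0.6):
    """Fraction of samples of size n whose observed proportion of 'boys'
    deviates from the true 50% rate to threshold or beyond."""
    count = 0
    for _ in range(trials):
        boys = sum(random.random() < 0.5 for _ in range(n))
        p = boys / n
        if p >= threshold or p <= 1 - threshold:
            count += 1
    return count / trials

small = extreme_share(10)    # the "small hospital"
large = extreme_share(100)   # the "large hospital"
print(f"share of extreme samples, n=10:  {small:.2f}")
print(f"share of extreme samples, n=100: {large:.2f}")
```

The small samples produce "extreme" days far more often, which is exactly why a researcher who picks too small a sample is at the mercy of sampling luck.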

The law of small numbers is part of two larger stories about the workings of the mind:
• The exaggerated faith in small samples is only one example of a more general illusion: we pay more attention to the content of messages than to information about their reliability.
• Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations. Many facts of the world are due to chance, including accidents of sampling. Causal explanations of chance events are inevitably wrong.

We are pattern seekers, believers in a coherent world, in which regularities appear not by accident but as a result of mechanical causality or of someone's intention. We do not expect to see regularity produced by a random process, and when we detect what appears to be a rule, we quickly reject the idea that the process is truly random. Random processes produce many sequences that convince people that the process is not random after all. Our predilection for causal thinking exposes us to serious mistakes in evaluating the randomness of truly random events.

Another example of a heuristic bias arises when judgments are influenced by an uninformative number (an anchor). People are influenced when they consider a particular value for an unknown number before estimating that number; the estimate then stays close to the anchor. For example, two groups estimated Gandhi's age when he died. The first group was initially asked whether he was older than 114; a second group was asked whether he was 35 or older. The first group then estimated a higher age at death than the second.14 Two different mechanisms produce anchoring effects, one for each system. There is a form of anchoring that occurs in a deliberate process of adjustment, an operation of System 2. And there is anchoring that occurs by a priming effect, an automatic manifestation of System 1, which results from associative activation. Insufficient adjustment neatly explains why you are likely to drive too fast when you come off the highway into city streets, especially if you are talking with someone as you drive.

14 Ibid., chapter 11.

Adjustment is a deliberate attempt to find reasons to move away from the anchor: people who are instructed to shake their head when they hear the anchor, as if they rejected it, move farther from the anchor, and people who nod their head show enhanced anchoring. Adjustment is an effortful operation. People adjust less (stay closer to the anchor) when their mental resources are depleted, either because their memory is loaded with digits or because they are slightly drunk. Insufficient adjustment is a failure of a weak or lazy System 2.

Suggestion is a priming effect, and anchoring results from the same associative activation: suggestion and anchoring are both explained by the same automatic operation of System 1. A process that resembles suggestion is indeed at work in many situations: System 1 tries its best to construct a world in which the anchor is the true number. System 1 understands sentences by trying to make them true, and the selective activation of compatible thoughts, which selectively evokes compatible evidence, produces a family of systematic errors that make us gullible and prone to believe too strongly whatever we believe. System 2 is also susceptible to the biasing influence of anchors that make some information easier to retrieve.

A key finding of anchoring research is that anchors that are obviously random can be just as effective as potentially informative anchors. Anchors clearly do not have their effects because people believe they are informative; the powerful effect of random anchors is an extreme case of this phenomenon, because a random anchor obviously provides no information at all. Anchoring effects, sometimes due to priming and sometimes to insufficient adjustment, are everywhere. The psychological mechanisms that produce anchoring make us far more suggestible than most of us would want to be, and of course there are quite a few people who are willing and able to exploit our gullibility. A strategy of deliberately "thinking the opposite" may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects.

A message, unless it is immediately rejected as a lie, will have the same effect on the associative system regardless of its reliability. The gist of the message is the story, which is based on whatever information is available, even if the quantity of the information is slight and its quality is poor. Whether the story is true, or believable, matters little. The main moral of priming research is that our thoughts and our behaviour are influenced, much more than we know or want, by the environment of the moment.

The concept of availability is the process of judging frequency by "the ease with which instances come to mind."15 The availability heuristic, like other heuristics of judgement, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind. Substitution of questions inevitably produces systematic errors. This heuristic is known to be both a deliberate problem-solving strategy and an automatic operation. A question considered early was how many instances must be retrieved to get an impression of the ease with which they come to mind; we now know the answer: none. Availability shows itself in several ways:
• A salient event that attracts your attention will be easily retrieved from memory.
• A dramatic event temporarily increases the availability of its category.
• Personal experiences, pictures, and vivid examples are more available than incidents that happened to others, mere words, or statistics.
One of the best-known studies of availability suggests that awareness of your own biases can contribute to peace in marriages, and probably in other joint projects. Resisting this large collection of potential availability biases is possible, but tiresome. The ease with which instances come to mind is a System 1 heuristic, which is replaced by a focus on content when System 2 is more engaged.

15 Ibid., chapter 12.

People who let themselves be guided by System 1 are more strongly susceptible to availability biases than others who are in a higher state of vigilance. The following are some conditions in which people "go with the flow" and are affected more strongly by ease of retrieval than by the content they retrieved:
• When they are engaged in another effortful task.
• When they are in a good mood.
• If they are depressed.
• If they are knowledgeable novices.
• When they have high faith in intuition.
• When they are, or are made to feel, powerful.

The notion of an affect heuristic was developed, in which people make judgements and decisions by consulting their emotions: Do I like it? Do I hate it? How strongly do I feel about it? "The emotional tail wags the rational dog." The affect heuristic simplifies our lives by creating a world that is much tidier than reality. In the real world, of course, we often face painful trade-offs between benefits and costs.

Availability effects help explain the pattern of insurance purchases and protective action after disasters.16 Victims and near victims are very concerned after a disaster, but the memories of the disaster dim over time, and so do worry and diligence. Protective actions, whether by individuals or governments, are usually designed to be adequate to the worst disaster actually experienced. Availability also has impacts on public policy, particularly with reference to the effect of the media; a particularly important concept here is the availability cascade. Estimates of causes of death are warped by media coverage, and the coverage is itself biased towards novelty and poignancy. The media do not just shape what the public is interested in, but are also shaped by it. The importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind. Availability cascades are real and they undoubtedly distort priorities in the allocation of public resources. One perspective is offered by Cass Sunstein, who would seek mechanisms that insulate decision makers from public pressures, letting the allocation of resources be determined by impartial experts who have a broad view of all risks and of the resources available to reduce them. Paul Slovic, on the other hand, trusts the experts much less and the public somewhat more than Sunstein does, and he points out that insulating the experts from the emotions of the public produces policies that the public will reject, an impossible situation in a democracy.

In the absence of specific information about a subject, you will go by the base rates.17 Judging probability by representativeness has important virtues: the intuitive impressions that it produces are often, indeed usually, more accurate than chance guesses would be. Representativeness involves ignoring both the base rates and the doubts about the veracity of the description; activation of an association with a stereotype is an automatic activity of System 1. Although it is common, prediction by representativeness is not statistically optimal. Logicians and statisticians have developed competing definitions of probability, all very precise. In contrast, people who are asked to assess probability are not stumped, because they do not try to judge probability as statisticians and philosophers use the word: a question about probability or likelihood activates a mental shotgun, evoking answers to easier questions. It is entirely acceptable for judgements of similarity to be unaffected by the base rates and also by the possibility that the description was inaccurate, because judgements of similarity and probability are not constrained by the same logical rules. But anyone who ignores base rates and the quality of evidence in probability assessments will certainly make mistakes, and that is a serious mistake.

16 Ibid., chapter 13.
17 Ibid., chapter 14.

In other situations, the stereotypes are false and the representativeness heuristic will mislead, especially if it causes people to neglect base-rate information that points in another direction. One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events. The second sin of representativeness is insensitivity to the quality of evidence.18 People without training in statistics are quite capable of using base rates in predictions under some conditions. Some people ignore base rates because they believe them to be irrelevant in the presence of individual information; others make the same mistake because they are not focussed on the task. Instructing people to "think like a statistician" enhanced the use of base rate information, while the instruction to "think like a clinician" had the opposite effect. To be useful, your beliefs should be constrained by the logic of probability. The relevant "rules" for such cases are provided by Bayesian statistics: the logic of how people should change their mind in the light of evidence. There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated.

In general, there is a conflict between the intuition of representativeness and the logic of probability. Tversky and Kahneman introduced the idea of a conjunction fallacy, which people commit when they judge a conjunction of two events to be more probable than one of the events in a direct comparison. When you specify a possible event in greater detail you can only lower its probability; this is often not intuitively obvious. The word fallacy is used when people fail to apply a logical rule that is obviously relevant.

18 Ibid., chapter 15.
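The logical rule behind the conjunction fallacy is simply that a conjunction can never be more probable than either of its parts: P(A and B) ≤ P(A). A toy calculation with invented, purely illustrative probabilities makes this concrete:

```python
# Invented probabilities for a Linda-style example (illustrative only):
p_teller = 0.05                  # P(bank teller)
p_feminist_given_teller = 0.40   # P(feminist | bank teller), assumed

# P(bank teller AND feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

# Adding detail can only lower the probability:
assert p_both <= p_teller
print(round(p_both, 3))  # 0.02
```

Whatever conditional probability is assumed, the product can never exceed the marginal, which is why judging the detailed description as more probable is a fallacy.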

Adding detail to scenarios makes them more persuasive, but less likely to come true; the fallacy remains attractive even when you recognise it for what it is. The laziness of System 2 is an important fact of life. Less is more, sometimes even in joint evaluation: the scenario that is judged more probable is unquestionably more plausible, a more coherent fit with all that is known. The uncritical substitution of plausibility for probability has pernicious effects on judgements when scenarios are used as tools of forecasting. The blatant violations of the logic of probability observed in transparent problems were interesting, and the observation that representativeness can block the application of an obvious logical rule is also of some interest. Intuition governs judgments in the between-subjects condition; logic rules in joint evaluation. In other problems, however, intuition often overcame logic even in joint evaluation, although some conditions were identified in which logic prevails. The solution to the puzzle appears to be that a question phrased as "how many?" makes you think of individuals, but the same question phrased as "what percentage?" does not. A reference to a number of individuals brings a spatial representation to mind, and the frequency representation, as it is known, makes it easy to appreciate that one group is wholly included in the other.

Next comes a standard problem of Bayesian inference.19 There are two items of information: a base rate and the imperfectly reliable testimony of a witness. You can probably guess what people do when faced with this problem: they ignore the base rate and go with the witness. Causes trump statistics.

19 Ibid., chapter 16.
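The witness-and-base-rate calculation can be made concrete with Bayes' rule. The figures below (85% of cabs Green, 15% Blue, a witness who identifies colours correctly 80% of the time) are the standard numbers for this cab problem, assumed here rather than stated in the text above:

```python
# Assumed cab-problem numbers:
p_blue = 0.15                    # base rate: 15% of cabs are Blue
p_say_blue_given_blue = 0.80     # witness is correct 80% of the time
p_say_blue_given_green = 0.20    # witness is wrong 20% of the time

# Bayes' rule: P(Blue | witness says "Blue")
numerator = p_say_blue_given_blue * p_blue
evidence = numerator + p_say_blue_given_green * (1 - p_blue)
posterior = numerator / evidence

print(f"{posterior:.0%}")  # 41%
```

Despite the witness being 80% reliable, the low base rate drags the posterior down to about 41%, the Bayesian estimate discussed in the text.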

Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates, in contrast, change your view of how the individual case came to be; they are treated as information about the individual case and are easily combined with other case-specific information. In the first version of the problem, the base rate is a statistical fact, and a mind that is hungry for causal stories finds nothing to chew on: people who read the first version do not know how to use the base rate and often ignore it. The Bayesian estimate is 41%, reflecting the fact that the base rate is a little more extreme than the reliability of the witness. Now consider a variation of the same story, in which only the presentation of the base rate has been altered. In this version, there are two causal stories that need to be combined or reconciled; the inferences from the two stories are contradictory and approximately cancel each other. People who see the second version give considerable weight to the base rate, and their average judgment is not too far from the Bayesian solution. The two versions of the problem are mathematically indistinguishable, but they are psychologically quite different. The example illustrates the two types of base rates, which are treated differently: statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available, while causal base rates fit easily into a causal story. Stereotyping is a bad word in our culture, but in the author's usage it is neutral: one of the basic characteristics of System 1 is that it represents categories as norms and prototypical exemplars. When the categories are social, we hold in memory a representation of one or more "normal" members of each of these categories, and these representations are called stereotypes, which you apply to unknown individual observations. Statements about a group are readily interpreted as setting up a propensity in individual members of that group, and they fit into a causal story; in accepting them, you have formed a stereotype.
The causal version of the cab problem had the form of a stereotype: Stereotypes are statements about the group that are (at least tentatively) accepted as facts about every member. in contrast. The two types of base-rate information are treated differently: Statistical base rates are generally underweighted. the base rate is a statistical fact. A mind that is hungry for causal stories finds nothing to chew on. In contrast. Why? In the first version.

For a Bayesian thinker, the versions are equivalent; in the context of the problem, the neglect of base-rate information is a cognitive flaw, a failure of Bayesian reasoning. The explicitly stated base rates had some effects on judgment, but they had much less impact than the statistically equivalent causal base rates. It is tempting to conclude that we have reached a satisfactory conclusion: causal base rates are used, merely statistical facts are more or less neglected, and the reliance on causal base rates is desirable. The next study, however, shows that the situation is rather more complex.

Stereotypes, both correct and false, are how we think of categories. Some stereotypes are perniciously wrong, and hostile stereotyping can have dreadful consequences. The social norm against stereotyping, including the opposition to profiling, which is also embedded in the law, has been highly beneficial in creating a more civilised and more equal society. You may note the irony, however: there is a strong social norm against stereotyping, but the psychological facts cannot be avoided, and it is useful to remember that neglecting valid stereotypes inevitably results in suboptimal judgements, because stereotyping improves the accuracy of judgement.

To teach students any psychology they did not know before, you must surprise them. But which surprise will do? When respondents were presented with a surprising statistical fact, they managed to learn nothing at all: respondents "quietly exempt themselves" (and their friends and acquaintances) from the conclusions of experiments that surprise them. The experiment in question showed that individuals feel relieved of responsibility when they know that others can take responsibility: even normal, decent people do not rush to help when they expect others to take on the unpleasantness of dealing with a seizure. But when the students were surprised by individual cases, two nice people who had not helped, they immediately made the generalisation and inferred that helping is more difficult than they had thought. Surprising individual cases have a powerful impact and are a more effective tool for teaching psychology, because the incongruity must be resolved and embedded in a causal story. This is a profoundly important conclusion. People who are taught surprising statistical facts about human behaviour may be impressed to the point of telling their friends about what they have heard, but this does not mean that their understanding of the world has really changed. The test of learning psychology is whether your understanding of situations you encounter has changed, not whether you have learned a new fact. System 1 can deal with stories in which the elements are causally linked, but it is weak in statistical reasoning. Statistical results with a causal interpretation have a stronger effect on our thinking than noncausal information; this proposition is supported by much evidence from research. But even compelling causal statistics will not change long-held beliefs or beliefs rooted in personal experience. There is a deep gap between our thinking about statistics and our thinking about individual cases.

Regression to the mean involves moving closer to the average than the earlier value of the variable observed: poor performance is typically followed by improvement and good performance by deterioration, without any help from either praise or punishment.20 An important principle of skill training follows: rewards for improved performance work better than punishment of mistakes. The feedback to which life exposes us, on the other hand, is perverse: because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty. The point to remember is that the change from the first occurrence to the second does not need a causal explanation. Regression to the mean has an explanation, but it does not have a cause: it is a mathematically inevitable consequence of the fact that luck played a role in the outcome of the first occurrence. Regression inevitably occurs when the correlation between two measures is less than perfect, and regression effects are ubiquitous, as are misguided causal stories to explain them. Correlation and regression are not two concepts: they are different perspectives on the same concept. The correlation coefficient between two measures, which varies between 0 and 1, is a measure of the relative weight of the factors they share. The general rule is straightforward but has surprising consequences: whenever the correlation between two scores is imperfect, there will be regression to the mean.

20 Ibid., chapter 17.
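Regression to the mean falls out of nothing more than a shared factor (skill) plus independent luck on each occasion. The sketch below is illustrative; the equal 50/50 split between skill and luck is an assumption, not a figure from the text:

```python
import random

random.seed(0)

# Each score = shared skill + independent luck on that occasion.
skill = [random.gauss(0, 1) for _ in range(10_000)]
day1 = [s + random.gauss(0, 1) for s in skill]   # skill + day-1 luck
day2 = [s + random.gauss(0, 1) for s in skill]   # same skill, new luck

# Take the top 10% of day-1 performers and compare their group averages.
cutoff = sorted(day1)[int(0.9 * len(day1))]
top = [i for i in range(len(skill)) if day1[i] >= cutoff]
avg1 = sum(day1[i] for i in top) / len(top)
avg2 = sum(day2[i] for i in top) / len(top)

print(f"top group, day 1 average: {avg1:.2f}")
print(f"top group, day 2 average: {avg2:.2f}")
# The day-2 average is markedly lower: the group regresses toward the
# mean with no praise, punishment, or causal story involved.
```

The top performers were partly skilled and partly lucky; on day 2 the luck resets, so the group moves back toward the average exactly as the imperfect correlation predicts.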

Our mind is strongly biased toward causal explanations and does not deal well with "mere statistics," and System 2 finds regression difficult to understand and learn. When our attention is called to an event, associative memory will look for its cause; more precisely, activation will automatically spread to any cause that is already stored in memory. When a link is found, what you see is all there is applies: your associative memory quickly and automatically constructs the best possible story from the information available. Causal explanations will be evoked when regression is detected, but they will be wrong, because the truth is that regression to the mean has an explanation but does not have a cause. This is due in part to the insistent demand for causal interpretations, which is a feature of System 1. Regression effects are a common source of trouble in research, and experienced scientists develop a healthy fear of the trap of unwarranted causal inference.

Life presents us with many occasions to forecast.21 Some predictive judgments, especially in the professional domain, rely largely on precise calculations. Others involve intuition and System 1, in two main varieties. Some intuitions draw primarily on skill and expertise acquired by repeated experience. Other intuitions, which are sometimes subjectively indistinguishable from the first, arise from the operation of heuristics that often substitute an easy question for the harder one that was asked. Many judgements are influenced by a combination of analysis and intuition. In intuitive prediction, the evidence is first evaluated in relation to a relevant norm; the next step involves substitution and intensity matching. We are capable of rejecting information as irrelevant or false, but adjusting for smaller weaknesses in the evidence is not something that System 1 can do. As a result, intuitive predictions are almost completely insensitive to the actual predictive quality of the evidence. Intuitive predictions need to be corrected because they are not based on regression to the mean and are therefore biased; correcting intuitive predictions is a task for System 2.

21 Ibid., chapter 18.

The final step is a translation from an impression of the relative position of the candidate's performance to the result. Intensity matching yields predictions that are as extreme as the evidence on which they are based: the prediction of the future is not distinguished from an evaluation of current evidence; prediction matches evaluation. People are asked for a prediction but they substitute an evaluation of the evidence, without noticing that the question they answer is not the one they were asked, and they completely ignore regression to the mean. This is perhaps the best evidence we have for the role of substitution. You should imagine a process of spreading activation that is initially prompted by the evidence and the question, feeds back upon itself, and eventually settles on the most coherent solution possible. By now you should realise that all these operations are features of System 1, and this process is guaranteed to generate predictions that are systematically biased.

Intuitive predictions need to be corrected because they are not regressive and are therefore biased. Correcting your intuitive predictions is a task for System 2: significant effort is required to find the relevant reference category, estimate the baseline prediction, and evaluate the quality of the evidence. The effort is justified only when the stakes are high and when you are particularly keen not to make mistakes. Corrected intuitive predictions eliminate these biases, so that predictions (both high and low) are about equally likely to overestimate and to underestimate the true value. You will still make errors when your predictions are unbiased, but the errors are smaller and do not favour either high or low outcomes. For a rational person, predictions that are unbiased and moderate should not present a problem. Nevertheless, the objections to the principle of moderating intuitive predictions must be taken seriously, because absence of bias is not always what matters most. A preference for unbiased predictions is justified if all errors of prediction are treated alike, regardless of their direction; but there are situations in which one type of error is much worse than another. Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1. It is natural for the associative machinery to match the extremeness of predictions to the

perceived extremeness of the evidence on which they are based: this is how substitution works. And it is natural for System 1 to generate overconfident judgements, because confidence, as we have seen, is determined by the coherence of the best story you can tell from the evidence at hand. Matching predictions to the evidence is not only something we do intuitively; it also seems a reasonable thing to do. Be warned: your intuitions will deliver predictions that are too extreme, and you will be inclined to put far too much faith in them. Regression is also a problem for System 2: the very idea of regression is alien and difficult to communicate and comprehend, and we will not learn to understand regression from experience. Even when a regression is identified, it will be given a causal interpretation that is almost always wrong.
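The corrective procedure for intuitive predictions can be reduced to one line of arithmetic: start from the baseline and move toward the intuitive prediction only in proportion to the correlation between the evidence and the outcome. The sketch below follows that recipe; the GPA figures and the 0.3 correlation are invented for illustration:

```python
def corrected_prediction(baseline, intuitive, correlation):
    """Regress an intuitive prediction toward the baseline.

    With correlation 1 the evidence is perfectly predictive and the
    intuition stands; with correlation 0 the evidence is worthless and
    the prediction collapses to the baseline."""
    return baseline + correlation * (intuitive - baseline)

# Invented example: predicting a student's GPA from a glowing report.
baseline = 3.0     # average GPA in the reference category
intuitive = 3.8    # prediction matched to the extremeness of the report
correlation = 0.3  # assumed predictive validity of such reports

print(round(corrected_prediction(baseline, intuitive, correlation), 2))  # 3.24
```

The corrected figure sits much closer to the baseline than the intuition does, which is exactly the moderation that unaided System 1 never applies.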
