Intelligence 50 (2015) 75–86

Contents lists available at ScienceDirect

Intelligence

Individual differences in cognitive biases: Evidence against one-factor theory of rationality☆

Predrag Teovanović a,⁎, Goran Knežević b, Lazar Stankov c

a Faculty for Special Education and Rehabilitation, University of Belgrade, Visokog Stevana 2, 11000 Belgrade, Serbia
b University of Belgrade, Serbia
c Australian Catholic University, Australia

Article info

Article history:
Received 13 September 2014
Received in revised form 19 January 2015
Accepted 24 February 2015
Available online xxxx

Keywords:
Cognitive biases
Rationality
Judgment and decision making
Intelligence
Factor analysis

Abstract

In this paper we seek to gain an improved understanding of the structure of cognitive biases and their relationship with measures of intelligence and relevant non-cognitive constructs. We report on the outcomes of a study based on a heterogeneous set of seven cognitive biases — anchoring effect, belief bias, overconfidence bias, hindsight bias, base rate neglect, outcome bias and sunk cost effect. New scales for the assessment of these biases were administered to 243 undergraduate students along with measures of fluid (Gf) and crystallized (Gc) intelligence, the Cognitive Reflection Test (CRT), an Openness/Intellect (O/I) scale and a Need for Cognition (NFC) scale. The expected experimental results were confirmed — i.e., each normatively irrelevant variable significantly influenced participants' responses. Also, with the exception of hindsight bias, all cognitive biases showed satisfactory reliability estimates (αs > .70). However, correlations among the cognitive bias measures were low (rs < .20). Although exploratory factor analysis produced two factors, their robustness was doubtful. Cognitive bias measures were also relatively independent (rs < .25) of Gf, Gc, CRT, O/I and NFC, and they defined separate latent factors. This pattern of results suggests that a major part of the reliable variance of cognitive bias tasks is unique, and implies that a one-factor model of rational behavior is not plausible.

© 2015 Elsevier Inc. All rights reserved.

1. Introduction

Intelligence encompasses a very broad range of cognitive processes, and empirical evidence for its generality is derived from the presence of positive manifold and the finding of about .30 average correlation among a large collection of cognitive tests (see Carroll, 1993). Developments from outside the individual-differences tradition may lead to the creation of new types of cognitive tasks that can enrich our understanding of intelligence. A good example has been the study of working memory (Baddeley & Hitch, 1974). Recently, Stankov (2013) pointed out that some measures of rationality – e.g., measures of scientific and probabilistic reasoning (Stanovich, 2012) – may reach .25 to .35 correlations with tests of intelligence. Although there is a paucity of information about the psychometric properties of measures of rationality, Stankov (2013) stated that "… cognitive measures based on studies of decision making and principles of scientific and probabilistic reasoning are perhaps the most interesting recent addition to the study of intelligence …" (p. 728). He also pointed out that since probabilistic and scientific reasoning are known to be open to cognitive biases which, as we shall see shortly, do not always show correlations with intelligence, it is important to study cognitive as well as non-cognitive aspects of the latter.

In this paper we examine the factorial structure of rational reasoning tasks used to assess seven cognitive biases, and relate these to well-known psychometric measures of intelligence and to aspects of personality and thinking dispositions. Two plausible outcomes can be anticipated. First, there may be sufficient evidence for communality among the bias measures, and one or more well-defined bias factors, correlated with tests of intelligence, may arise. This would place cognitive bias measures well within the traditional understanding of intelligence. Second, there may be poor support for the presence of either common factors or for the correlation of bias measures with tests of intelligence. While this outcome would not necessarily place cognitive biases outside the cognitive domain, their standing would become restricted to the relatively narrow domain of decision making. Under this latter scenario, cognitive biases would have a status similar to some of the measures from neuropsychology; they are employed to detect cognitive deficits but are infrequently used in mainstream intelligence assessment.

☆ This research was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia, Project No. 179018.
⁎ Corresponding author. E-mail addresses: teovanovic@gmail.com, teovanovic@fasper.bg.ac.rs (P. Teovanović).
http://dx.doi.org/10.1016/j.intell.2015.02.008
0160-2896/© 2015 Elsevier Inc. All rights reserved.

1.1. Cognitive biases as departures from normative models of rationality

Empirical research in the areas of judgment and decision making, as well as memory and reasoning, has produced reliable evidence that the outcomes of cognitive processes often systematically depart from what is normatively predicted to be rational behavior. With the arrival of the heuristics-and-biases research program in the early 1970s, these findings came to be referred to as cognitive biases¹ (see the Method section for example tasks) that arise as a consequence of heuristics, that is, experience-based strategies that reduce complex cognitive tasks to simpler mental operations (Gilovich, Griffin, & Kahneman, 2002; Kahneman, Slovic, & Tversky, 1982; Tversky & Kahneman, 1974). By producing many cognitive bias tasks that depict circumstances under which relying on heuristics leads to systematic violations of normative models, this program emphasized the conditions of predictable irrationality (Ariely, 2009).

On the other hand, proponents of ecological rationality have argued that rational behavior should not be defined with respect to abstract normative standards, or – as Gigerenzer (2004) puts it – "rationality is not logical, but ecological" (p. 64). Within this paradigm, cognitive biases are not considered errors of cognitive processing, but rather a result of highly constrained and artificial experimental conditions, since cognitive bias tasks diverge considerably from those in the natural environment (Gigerenzer, 1996, 2004; Gigerenzer, Hoffrage, & Kleinbölting, 1991; Hertwig, Fanselow, & Hoffrage, 2003; Hoffrage, Hertwig, & Gigerenzer, 2000).

It was only in the late 1990s that researchers became cognizant of the considerable variability across participants on each of the cognitive bias tasks. After Stanovich and West's (1998, 2000) call for a debate about the role individual differences play in the deviation between the outcomes of cognitive processes and those of normative models, a growing body of correlational studies of cognitive biases emerged. Two separate topics, which can be distinguished from the perspective of differential psychology, are briefly summarized in the following sections.

¹ Systematic departures from normative models are sometimes referred to as cognitive illusions (Pohl, 2004), thinking errors (Stanovich, 2009) and thinking biases (Stanovich & West, 2008).

1.2. Correlates of cognitive biases

Intelligence was undoubtedly the prime candidate for predicting individual differences in cognitive biases. The initial findings of modest negative correlations of intelligence tests with belief bias, confirmation bias, base rate neglect, outcome bias, overconfidence bias and hindsight bias were interpreted as clues about the importance of algorithmic limitations in the emergence of predictable fallacies (Stanovich & West, 1998, 2000). However, some studies suggest that at least two cognitive biases included in the present study – the anchoring effect (Furnham, Boo, & McClelland, 2012; Stanovich & West, 2008) and the sunk cost effect (Parker & Fischhoff, 2005; Stanovich & West, 2008) – may not be related to cognitive ability measures. As a result of the most comprehensive study on this subject, Stanovich and West (2008) provided lists of cognitive biases that do and do not show an association with intelligence, and argued that the correlation should be expected only when considerable cognitive capacity is required in order to carry out the computation of a normatively correct response to a bias task.

Some other aspects of cognitive functioning are also related to cognitive biases. Previous research has shown that low scores on the Cognitive Reflection Test (CRT), which was devised as a measure of "the ability or disposition to resist reporting the response that first comes to mind" (Frederick, 2005, p. 36), are related to probability overestimation (Albaity, Rahman, & Shahidul, 2014), the conjunction fallacy (Hoppe & Kusterer, 2011; Oechssler, Roider, & Schmitz, 2009) and impatience in time-preference judgment (Albaity et al., 2014; Frederick, 2005). The CRT is also related to performance on a broad range of cognitive bias tasks, and it has predictive validity over and above intelligence (Toplak, West, & Stanovich, 2011, 2014). This is reminiscent of Stanovich's assertion that individual differences in the detection of the need to override heuristic responses, which are assessed by the Actively Open-Minded Thinking and Need for Cognition scales (Stanovich, 2009, 2012; Stanovich & West, 2000, 2008), may be related to cognitive biases. Similarly, it is plausible to assume that the personality trait of Openness/Intellect, which is associated with cognitive performance, may also account for variance in performance on cognitive bias tasks.

1.3. Relationships among cognitive biases

The other topic deals with the generality of individual differences in cognitive biases, and questions how these biases are related to each other. De Bruin, Parker, and Fischhoff (2007) have stated that positive manifold among cognitive bias tasks might indicate an underlying ability construct, which they have termed decision-making competence. Stanovich and West (1998) were the first to report significant positive correlations among belief bias, base rate neglect and outcome bias (Experiment 1), as well as between overconfidence and hindsight bias (Experiment 4).

Subsequent studies have shown that the reliability of composite scores derived from a relatively large set of bias tasks is poor (Toplak et al., 2011, 2014; West, Toplak, & Stanovich, 2008) and that correlations among cognitive biases are only of modest strength (Klaczynski, 2001; Stanovich & West, 1998, 2000).

Eventually, it became clear that it is possible to extract at least two latent factors from the matrices of intercorrelations between various cognitive bias measures (De Bruin et al., 2007; Parker & Fischhoff, 2005; Weaver & Stewart, 2012). Taken together, these results indicate that the cognitive bias space is multidimensional.

A number of classifications of cognitive biases available in the literature today also suggest that the population of cognitive biases is heterogeneous. Conceptually, cognitive biases differ with respect to the normative models they violate. From a theoretical point of view, biases can be distinguished with regard to the cognitive processes they tap (Pohl, 2004; Stanovich, 2009), and whether they are considered consequences of heuristics, artificial procedures, biased error management (Haselton, Nettle, & Andrews, 2005), selective attention, motivation or psychophysical distortions (Baron, 2008). Similar points were made by other investigators (see Arnott, 2006; Carter, Kaufmann, & Michel, 2007; Stanovich, 2003). From a methodological point of view, performance on cognitive bias tasks can be evaluated in terms of consistency, by comparing related responses, or in terms of accuracy, relative to external criteria (De Bruin et al., 2007; Parker & Fischhoff, 2005). This corresponds to Kahneman's distinction between coherence rationality and reasoning rationality (Kahneman, 2000; Kahneman & Frederick, 2005).

It is important to note that an empirical classification of cognitive biases can also be based on individual-differences information obtained from multiple tasks and measures. Although some previous studies in this field did employ factor analysis (De Bruin et al., 2007; Klaczynski, 2001; Liberali, Reyna, Furlan, Stein, & Pardo, 2012; Parker & Fischhoff, 2005; Weaver & Stewart, 2012), their primary aims were not to identify latent dimensions of the cognitive bias domain. For the purposes of the study presented here, multi-item instruments for the measurement of individual differences in seven cognitive biases were devised and administered along with measures of intelligence and relevant non-cognitive constructs. These cognitive biases were selected with the intention of increasing the level of sample representativeness by covering a variety of categories in alternative taxonomies (see Table 1).

1.4. Aims

This study had three aims. The first was to estimate the reliability of cognitive bias measures. Significant effects of normatively irrelevant variables would confirm the experimental reliability of the examined cognitive biases, while satisfactory levels of internal consistency would indicate that individual differences in cognitive biases can be precisely measured.

The second aim was to assess whether correlations between cognitive bias measures are high enough to extract meaningful common factor(s). However, a single latent factor of rationality was not expected, due to the pronounced theoretical and methodological heterogeneity of the cognitive bias domain (Arnott, 2006; Carter et al., 2007; Haselton et al., 2005; Kahneman & Frederick, 2005; Pohl, 2004; Stanovich, 2003, 2009). Furthermore, the degrees of common variance among individual cognitive biases reported in previous studies do not provide evidence for a strong unidimensional construct of rationality (De Bruin et al., 2007; Klaczynski, 2001; Parker & Fischhoff, 2005; Stanovich & West, 1998, 2000; Toplak et al., 2011, 2014; Weaver & Stewart, 2012; West et al., 2008).

The third aim was to estimate correlations of cognitive biases with measures that address well-established constructs such as fluid and crystallized intelligence, openness and need for cognition, as well as cognitive reflection.

2. Method

2.1. Participants

The participants were 243 undergraduate students (22 males) who attended the first-year Introductory Methodology course at the Faculty of Special Education and Rehabilitation, University of Belgrade. Their mean age was 19.83 (SD = 1.31). Participants gave their informed consent before taking part in the study.

2.2. Measures of cognitive biases

2.2.1. Anchoring Effect

Anchoring effect refers to a systematic influence of initially presented values on numerical judgments. A simple two-step procedure for eliciting this phenomenon, referred to as the standard paradigm of anchoring (Epley & Gilovich, 2001; Strack & Mussweiler, 1997), was first presented in the seminal work of Tversky and Kahneman (1974). In the first step of this procedure, subjects are asked to judge whether a certain quantity, such as the percentage of African countries in the United Nations, is higher or lower than a number that is randomly generated, e.g., by spinning a wheel of fortune (comparative task). In the second step, participants are asked to provide their numerical estimate of the very same quantity (estimation task). The results showed notable effects of arbitrary numbers on participants' estimates (e.g., the percentage of African countries in the United Nations was estimated to be 25 and 45 for groups that were presented with anchors of 10 and 65, respectively).

Table 1
Alternative classifications of cognitive biases.

Cognitive bias      | Normative model    | Pohl (2004) | Stanovich (2009)      | Baron (2008)           | Arnott (2006) | Carter et al. (2007) | De Bruin et al. (2007)
Anchoring effect    | Coherence          | Judgment    | Focal bias            | Availability           | Adjustment    | Reference point      | Consistency
Belief bias         | Logic              | Thinking    | Override failure      | Availability           | /             | Confirmatory         | Accuracy
Overconfidence bias | Calibration        | Judgment    | Egocentric processing | Motivational bias      | Confidence    | Control illusion     | Accuracy
Hindsight bias      | Coherence          | Memory      | Override failure      | Imperfect correlation  | Memory        | Output evaluation    | Consistency
Base rate neglect   | Probability theory | Thinking    | Mindware gap          | Focus on one attribute | Statistical   | Base rate            | Accuracy
Sunk cost effect    | Utility theory     | Judgment    | /                     | Imperfect correlation  | Situation     | Commitment           | Accuracy
Outcome bias        | Coherence          | Judgment    | Override failure      | Imperfect correlation  | /             | Confirmatory         | Consistency

Subsequent research revealed that the anchoring effect is "one of the most reliable and robust results of experimental psychology" (Kahneman, 2011, p. 119; for a recent review, see Furnham & Boo, 2011). It was also reported that the size of the anchoring effect depends on the relative anchor distance (Hardt & Pohl, 2003; Strack & Mussweiler, 1997) and that it most probably follows an inverted-U curve — i.e., moderate anchors produce more pronounced anchoring effects than extremely low and extremely high anchors (Wegener, Petty, Detweiler-Bedell, & Jarvis, 2001). Although elegant, the standard paradigm of anchoring lacks information about an anchor-free estimate for each participant, and it is therefore suitable neither for the collection of individual anchoring effect measures nor for the control of anchor distance at the individual level. For these reasons, we employed an extended procedure in which anchor-free estimates were collected prior to assessing the anchoring effect proper. Specifically, 24 general knowledge questions (e.g., "What is the number of African countries in the United Nations?", "What is the highest temperature ever recorded?", "How many castles are there in Vojvodina?") were presented on a computer screen in the initial phase, one at a time, and participants were instructed to state their estimates (E1) by using a numeric keypad. In the next phase, which followed immediately, the standard paradigm of anchoring was applied and participants were instructed to answer the same set of questions, with an option to amend their initial responses. For each of the 24 questions, participants were administered a comparative task and a final estimation task. In the comparative task, participants were required to indicate whether their final response would be higher or lower than the value of an anchor (A). Anchors were set automatically by multiplying the anchor-free estimates (E1) by predetermined values (which ranged from 0.2 to 1.8 across questions). Afterwards, in the estimation task, participants stated their final response (E2) by using the numeric keypad. On each question, a bias score was calculated as the difference between the two estimates (E2 − E1) divided by the distance of the anchor from the initial estimate (A − E1). Scores lower than 0 (lack of anchoring) or higher than 1 (total anchoring) were brought to the boundaries of the acceptable range (16.41% of all scores). The average bias score across the 24 questions was used as the measure of susceptibility to the anchoring effect.
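
This scoring rule is easy to express in code. The following is a minimal sketch under the definitions above; the function names and the example numbers are ours, not the authors'.

```python
import numpy as np

def anchoring_score(e1, a, e2):
    """Per-question anchoring index: (E2 - E1) / (A - E1), clipped to [0, 1].

    0 = the final estimate did not move toward the anchor;
    1 = the final estimate landed on the anchor itself (total anchoring).
    Assumes A != E1, i.e., the anchor differs from the initial estimate.
    """
    return float(np.clip((e2 - e1) / (a - e1), 0.0, 1.0))

def anchoring_susceptibility(triples):
    """Mean clipped score over a participant's 24 (E1, A, E2) triples."""
    return float(np.mean([anchoring_score(e1, a, e2) for e1, a, e2 in triples]))

# Example: initial estimate 50, anchor 0.2 * 50 = 10, final estimate 30:
print(anchoring_score(50, 10, 30))  # (30 - 50) / (10 - 50) = 0.5
```

Note that the clipping step is what confines the per-question score to the "acceptable range" mentioned above, at the cost of discarding information about movement away from the anchor.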

2.2.2. Belief Bias

Belief bias is a predictable tendency to evaluate deductive arguments on the basis of the believability of the conclusion, rather than on the basis of logical validity (Evans, Barston, & Pollard, 1983). Eight pairs of syllogistic reasoning problems, some of them based on examples from previous research (Markovits & Nantel, 1989; Stanovich & West, 1998), were used in the present study. In the eight inconsistent items, the logical validity of the argument and the empirical status of the conclusion were in conflict. Four of them were valid but unbelievable (e.g., "All mammals walk. Whales are mammals. Therefore, whales walk.") and four were invalid but believable (e.g., "All fish have gills. Catfish has gills. Therefore, catfish is a fish."). In the wording of the eight consistent items, which had the same formal argument structure as their inconsistent counterparts, logic was congruent with the believability of the conclusion. Four of them were both valid and believable (e.g., "All men are mortal. I am a man. Therefore, I am mortal.") and four were both invalid and unbelievable (e.g., "All trolley buses use electricity. Boilers use electricity. Therefore, trolley buses are boilers."). Participants were instructed to evaluate the syllogisms by indicating whether or not the conclusion necessarily follows from the premises, assuming that all premises are true. Although previous studies had used the number of incorrect responses to inconsistent items as the measure of belief bias (e.g., Klaczynski & Daniel, 2005; Stanovich & West, 1998), we employed a slightly different scoring strategy. This was done in order to obtain more reliable measures of individual differences in susceptibility to belief bias, i.e., to control for the random variance that was expected given the dichotomous response format. The responses to each pair of items were scored as biased if, and only if, the participant indicated a correct answer on a consistent item and an incorrect answer on the corresponding inconsistent item. In other words, incorrect responses on consistent items were treated as indicators of possible random responding to the counterpart inconsistent items that exploit the same formal argument structure.²

² Incorrect responses on consistent items were ascribed to random sources of variance (lack of attention, temporary memory deactivation, distraction, etc.), since it could not be assumed that these judgments were based either on a priori beliefs or on logical validity, simply because the two were consistent.
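
In code, the pair-wise scoring rule reads as follows; this is a minimal sketch with hypothetical response vectors (the names are ours):

```python
def belief_bias_score(consistent_correct, inconsistent_correct):
    """Count of biased pairs among the eight syllogism pairs.

    A pair is scored as biased only when the consistent item was solved
    correctly (ruling out random responding) while its inconsistent
    counterpart, built on the same formal structure, was answered
    incorrectly. Both arguments are sequences of booleans, aligned by pair.
    """
    return sum(c and not i for c, i in zip(consistent_correct, inconsistent_correct))

# Correct on all eight consistent items, wrong on three inconsistent ones:
print(belief_bias_score([True] * 8,
                        [True, False, True, False, True, False, True, True]))  # 3
```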

2.2.3. Overconfidence Bias

Overconfidence bias is a common inclination of people to overestimate their own ability to successfully perform a particular task (Brenner, Koehler, Liberman, & Tversky, 1996). In order to avoid the experimental dependency that allows the association between measures of confidence and intelligence to be interpreted as artifactual (Ackerman, Beier, & Bowen, 2002), it has been recommended that these measures be obtained with different instruments (Pallier et al., 2002). In this study, confidence rating scales were attached to the 21 items of the Letter Counting test (Stankov, 1994). Participants were required to count how many times target letters occurred while different combinations of letters were displayed serially on the screen, 1 s apart. After responding to each of the items, participants were instructed to assess their degree of confidence in that answer on an 11-point rating scale ranging from 0% to 100% in steps of 10%. The difference between the confidence score, expressed as the mean percentage of confidence judgments across all the items in the test, and the accuracy score, expressed as the percentage of correct answers, was used as a measure of calibration (Stankov, 1999). Greater absolute values pointed to higher miscalibration, with positive values indicating overconfidence and negative values indicating underconfidence. However, only 9 participants (3.7%) had a negative calibration score, while the vast majority showed a strong overconfidence bias (72.8% of participants had calibration scores higher than 20).
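
A minimal sketch of the calibration score under the definitions above (names and example numbers are ours):

```python
def calibration_score(confidences, correct):
    """Mean confidence (in %) minus the percentage of correct answers.

    confidences: per-item ratings on the 0-100% scale (steps of 10)
    correct:     per-item 0/1 accuracy on the Letter Counting items
    Positive scores indicate overconfidence, negative ones underconfidence.
    """
    mean_confidence = sum(confidences) / len(confidences)
    percent_correct = 100 * sum(correct) / len(correct)
    return mean_confidence - percent_correct

# Mean confidence of 80% with 11 of 21 items correct (52.4%) gives about +27.6:
print(round(calibration_score([80] * 21, [1, 0] * 10 + [1]), 1))  # 27.6
```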

2.2.4. Hindsight Bias

Hindsight bias is a propensity to perceive events as having been more predictable once they have occurred (Fischhoff, 1975). In other words, judgments made with the benefit of information about the outcome of an event differ systematically from judgments made without such knowledge. However, the meta-analytically estimated overall effect size of outcome information is small (Christensen-Szalanski & Willham, 1991; Guilbault, Bryant, Brockway, & Posavac, 2004), as is the internal consistency of hindsight bias measures (Pohl, 1999, according to Musch, 2003). Hindsight bias is usually regarded as a hard-to-avoid consequence of information processing and storage. Outcome information irreversibly changes knowledge representation (Hoffrage et al., 2000), or it serves as an anchor when one tries to reconstruct forgotten estimates (Hardt & Pohl, 2003). In this study, a within-subject memory/recall design was employed. In the first phase, participants were instructed to provide an answer to a set of 14 questions (e.g., to find the exception in a set such as "cow, wolf, cat, donkey, deer"), and also to rate their answer on a confidence rating scale. In the second phase, participants were instructed to recall their initial confidence rating immediately after receiving feedback on the specific answer. Responses were coded as hindsighted if participants lowered their recalled confidence after negative feedback or raised it after positive feedback. The bias score was expressed as the proportion of hindsighted responses.
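
A sketch of this coding, assuming one initial rating, one recalled rating and one feedback flag per item (the names are ours):

```python
def hindsight_score(initial_conf, recalled_conf, positive_feedback):
    """Proportion of hindsighted recollections across the 14 items.

    A recollection is hindsighted when it shifts toward the feedback:
    recalled confidence above the initial rating after positive feedback,
    or below it after negative feedback.
    """
    flags = [(r > i) if pos else (r < i)
             for i, r, pos in zip(initial_conf, recalled_conf, positive_feedback)]
    return sum(flags) / len(flags)

# One upward shift after positive feedback, one faithful recollection:
print(hindsight_score([60, 70], [80, 70], [True, False]))  # 0.5
```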

2.2.5. Sunk Cost Effect

Sunk cost effect refers to the tendency to "continue an endeavor once an investment of money, effort, or time has been made" (Arkes & Blumer, 1985, p. 124). Eight hypothetical scenarios were used to describe various situations in which the decision maker had to choose between an option connected to an unrecoverable past expenditure (sunk-cost option) and an option that was worded as more beneficial (normative option). Most of the tasks were based on examples from the existing literature (De Bruin et al., 2007; Thaler, 1980), but were modified to engage undergraduate students (e.g., "It is the last day of your summer vacation. For that day, you have previously paid for a boat excursion to a nearby island. Suddenly, you are offered a dream activity for free: one hour of scuba diving with an instructor."). Participants were instructed to make a choice between the two options by using a rating scale that ranged from 1 (most likely to choose the normatively correct option) to 6 (most likely to choose the sunk-cost option). Average rating scores were used as measures of susceptibility to the sunk cost effect.

2.2.6. Base Rate Neglect

Base rate neglect refers to a tendency to ignore statistical information about prior probabilities in favor of specific evidence concerning the individual case (Kahneman & Tversky, 1973). For each of the 10 causal base rate problems in the study, participants were presented with two kinds of information: (a) base rates, i.e., background information (e.g., "The probability of winning a lottery ticket is 10%."), and (b) information concerning the specific case (e.g., "John, who was wearing his lucky shirt and who had found a four-leaf clover that day, decided to buy one lottery ticket."). Participants were required to estimate the probability of the criterion behavior (e.g., "What is the probability that John bought a winning ticket?") on an 11-point percentage scale. The individual cases were described using information that was normatively irrelevant to the criterion behavior but was supposed to cue stereotypical beliefs. The base rate neglect score was expressed as the proportion of responses that differed from the base rate information in the direction implied by the specific information (e.g., higher than 10% in the above example).
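
A sketch of this proportion score; the cue-direction coding is our illustrative assumption (an item like the one above cues an upward shift):

```python
def base_rate_neglect_score(estimates, base_rates, cue_direction):
    """Proportion of probability judgments pulled away from the base rate
    in the direction implied by the individuating information.

    estimates, base_rates: percentages on the 11-point (0-100%) scale
    cue_direction: +1 when the vignette cues a judgment above the base
                   rate, -1 when it cues one below it
    """
    flags = [(est - br) * d > 0
             for est, br, d in zip(estimates, base_rates, cue_direction)]
    return sum(flags) / len(flags)

# Judging 40% despite a stated 10% base rate on the 'lucky shirt' item,
# but sticking to the base rate on a second item:
print(base_rate_neglect_score([40, 10], [10, 10], [+1, +1]))  # 0.5
```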

2.2.7. Outcome Bias

Outcome bias is the tendency to judge the quality of a decision on the basis of information about the outcome of that decision. Such judgments are erroneous with respect to the normative assumption that "information that is available only after a decision is made is irrelevant to the quality of the decision" (Baron & Hershey, 1988, p. 569). In this study, a within-subject design was employed. In the first phase, participants were presented with descriptions of 10 decisions made by others under uncertain conditions (e.g., "Two seconds before the end of the basketball match, with the score 79 to 77 in favor of the opponent, a player of the Serbian national basketball team decided to take a shot that would earn three points"), along with the outcome of the decision, which was either positive (e.g., "… and he scored") or negative (e.g., "… and he missed"). Participants' task was to judge the quality of the decision by indicating whether the decision maker should pursue the same choice in similar situations, on a rating scale ranging from 1 (absolutely not) to 6 (absolutely yes). A week later, a parallel form of the questionnaire was administered, with the same 10 decisions but with the outcomes reversed: if the outcome was positive in the first form, it was negative in the second, and vice versa. The answering scale was the same. Pairs of items were compared, and the measure of outcome bias was calculated as the difference between the ratings of the same decisions with the positive and the negative outcome.
for the completion of this test. On average, the participants Outcome bias 0 1.55 0.91 2.43
completed this test in 13 min (SD = 2.03). Note. For cognitive biases that were defined in respect to consistency criterion
(anchoring effect, hindsight bias and outcome bias), within-subject designs were
2.3.5. Analogies Test employed with normatively irrelevant variables as experimental factors and
Analogies Test (Wolf et al., 1992) is a measure of analytical Cohen's ds were calculated by using standard formula d = (M1 − M2) / SD. For
cognitive biases that were defined with respect to the accuracy criterion, Cohen's
reasoning through the use of partial analogies. The test consists
ds were calculated by the formula d = (M − μ) / SD, where μ is normative
of 39 relatively easy multiple choice items, with four alterna- value.
tives. Each item had a stem (e.g., “food: man = fuel: ?”) and
participants were asked to respond by choosing their answer
from among four alternatives (e.g., “oil, automobile, gas,
was administered to participants with instructions to rate the
wheel”). A time limit of 2 min was imposed.
extent to which they agree with each statement (e.g., “I prefer
my life to be filled with puzzles that I must solve.”).
2.3.6. Synonyms–Antonyms Test
Synonyms–Antonyms Test (Wolf et al., 1992) consists of 40
items, each item representing a pair of words which are either 2.5. Procedure
synonyms or antonyms (e.g., “black–white”, “thin–fat”). Sub-
jects were asked to judge which of the two possible relations All measures were computer administered in two sessions,
(i.e., either synonym or antonym) is present. Time was limited one week apart. Overall testing time was about 60 min per
to 2 min. session, depending on the individual's speed of responding. In
the first session, the participants completed a battery of six
2.3.7. Cognitive Reflection Test cognitive ability tests and the first part of the outcome bias
Cognitive Reflection Test (CRT; Frederick, 2005) is com- questionnaire. In the second session, Need For Cognition and
posed of three questions that trigger most participants to Intellect/Openness scales were administered, along with the
answer immediately and incorrectly. For example, to a question Cognitive Reflection Test, the second part of the outcome bias
“If it takes 5 machines 5 min to make 5 widgets, how long will it questionnaire, and the instruments for measuring individual
take 100 machines to make 100 widgets?” as much as 78.1% of differences in six other cognitive biases.
participants answered “100 min”, although the correct an-
swers is “5 min”. 3. Results

2.4. Non-cognitive measures

2.4.1. Openness/Intellect

Openness/Intellect (O/I) is an empirically derived dimension of personality. Its "central characteristic [is] the disposition to detect, explore and utilize abstract and sensory information" (DeYoung, 2011, p. 717). Among the Big Five personality traits, O/I is the one most closely correlated with intelligence (Ackerman & Heggestad, 1997). For a quick assessment of this trait, 12 items from the NEO-FFI (McCrae & Costa, 2004) were used, each of which was rated by the participants on a five-point scale according to how well it described them (e.g., "Sometimes when I am reading poetry or looking at a work of art, I feel a chill or wave of excitement.").

2.4.2. Need for Cognition

Need for Cognition (NFC) is a measure of cognitive style that reflects "the tendency for an individual to engage in and enjoy thinking" (Cacioppo & Petty, 1982, p. 116). People high in need for cognition are more analytical in their thinking strategies and generally more thoughtful. In this study, an 18-item Need for Cognition scale (Cacioppo, Petty, & Kao, 1984) was administered to participants with instructions to rate the extent to which they agreed with each statement (e.g., "I prefer my life to be filled with puzzles that I must solve.").

2.5. Procedure

All measures were computer administered in two sessions, one week apart. Overall testing time was about 60 min per session, depending on the individual's speed of responding. In the first session, the participants completed a battery of six cognitive ability tests and the first part of the outcome bias questionnaire. In the second session, the Need for Cognition and Openness/Intellect scales were administered, along with the Cognitive Reflection Test, the second part of the outcome bias questionnaire, and the instruments for measuring individual differences in the six other cognitive biases.

3. Results

3.1. Reliability and discriminative ability of cognitive bias measures

Descriptive statistics for the seven cognitive bias measures are presented in Table 2, with higher scores indicating a more pronounced bias. The results confirm the experimental reliability of the examined cognitive bias phenomena. Mean bias scores deviated from normative values by 0.24 (for hindsight bias) to 3.58 (for base rate neglect) standard deviations, revealing, with the exception of hindsight bias, large effect sizes of the normatively irrelevant variables according to Cohen's conventions (Cohen, 1992).

Table 2
Descriptive statistics for cognitive bias tasks.

Cognitive bias      | Normative value | Mean  | Standard deviation | Cohen's d
Anchoring effect    | 0               | 0.44  | 0.18               | 2.05
Belief bias         | 0               | 4.05  | 2.34               | 2.05
Overconfidence bias | 0               | 32.56 | 18.99              | 1.71
Hindsight bias      | 0               | 0.33  | 0.20               | 0.24
Base rate neglect   | 0               | 0.78  | 0.22               | 3.58
Sunk cost effect    | 1               | 3.16  | 1.06               | 2.04
Outcome bias        | 0               | 1.55  | 0.91               | 2.43

Note. For cognitive biases that were defined with respect to the consistency criterion (anchoring effect, hindsight bias and outcome bias), within-subject designs were employed with normatively irrelevant variables as experimental factors, and Cohen's ds were calculated by the standard formula d = (M1 − M2) / SD. For cognitive biases that were defined with respect to the accuracy criterion, Cohen's ds were calculated by the formula d = (M − μ) / SD, where μ is the normative value.
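
The formulas in the Note can be checked directly against the table; a quick sketch of ours (the small discrepancy for base rate neglect reflects rounding of M and SD in the table):

```python
def cohens_d_accuracy(mean, sd, mu=0.0):
    # d = (M - mu) / SD, with mu the normative value from Table 2
    return (mean - mu) / sd

print(round(cohens_d_accuracy(0.78, 0.22), 2))        # 3.55 vs. the reported 3.58
print(round(cohens_d_accuracy(3.16, 1.06, mu=1), 2))  # 2.04 for the sunk cost effect
```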

The results of the Kolmogorov–Smirnov test, displayed in Table 3, show that scores were approximately normally distributed for the measures of anchoring effect, overconfidence bias and sunk cost effect. Outcome bias scores were positively skewed, while the distribution of base rate neglect scores was negatively skewed and also leptokurtic. Platykurtic distributions were observed for the belief bias and hindsight bias measures.³

³ Several corrections for departures from normality were applied and their effects were examined. Since these manipulations did not affect the subsequent analyses, they are not reported here.

Table 3
Discriminative properties and internal consistency of cognitive bias measures.

Cognitive bias      | Potential range | Observed range | Skewness | Kurtosis | KS Z | KS p  | N  | Alpha
Anchoring effect    | 0–1             | .06–.92        | 0.08     | −0.18    | 0.78 | .62   | 24 | .77
Belief bias         | 0–8             | 0–8            | −0.12    | −1.12    | 2.05 | <.001 | 8  | .76
Overconfidence bias | −100 to 100     | −20 to 76      | −0.22    | −0.19    | 0.76 | .61   | 7  | .94
Hindsight bias      | 0–1             | .00–.86        | 0.12     | −0.73    | 1.56 | .02   | 14 | .66
Base rate neglect   | 0–1             | 0–1            | −1.33    | 1.64     | 3.14 | .00   | 10 | .71
Sunk cost effect    | 1–6             | 1–6            | 0.09     | −0.33    | 0.67 | .76   | 8  | .76
Outcome bias        | −5 to 5         | −0.60 to 4.60  | 0.74     | 0.31     | 1.66 | .01   | 10 | .83

Note. Standard errors of skewness and kurtosis were 0.16 and 0.31, respectively. Cronbach's alpha for overconfidence bias was calculated from seven testlets, each consisting of the differences between confidence and accuracy on three subsequent items. KS — Kolmogorov–Smirnov; N — number of items, i.e., testlets.
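
Cronbach's alpha as used throughout this section is the standard coefficient; a compact sketch, with the testlet construction for the overconfidence measure noted in the docstring (our implementation, not the authors'):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a participants-by-items (or testlets) matrix.

    For the overconfidence measure, each of the seven 'items' would be a
    testlet: the confidence-minus-accuracy difference aggregated over
    three consecutive Letter Counting items.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```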


However, scores on each measure span most of the theoretical range, showing the variability necessary for assessing reliability in terms of internal consistency. Cronbach's alpha was above .70 for all measures except hindsight bias, indicating a satisfactory level of reliability for the cognitive bias measures. Our results also indicate that the criterion for scoring responses on the belief bias instrument described in the Method section produces a more reliable measure (α = .76) than the procedure that takes into account only the responses to the inconsistent syllogistic reasoning tasks (α = .59).

3.2. Correlations among cognitive bias measures

Table 4 shows the bivariate correlations between the cognitive bias measures. All 21 correlation coefficients were low (rs < .20), and only seven were statistically significant (ps < .05). Nevertheless, each measure correlated with at least one other measure, while sunk cost had four significant correlations. In addition, the results of Bartlett's test of sphericity (χ²(21) = 61.54, p < .001) and the Kaiser–Meyer–Olkin measure of sampling adequacy (KMO = .56) indicate that the minimum criteria for factor analysis were met.

Table 4
Correlations between cognitive bias measures.

Cognitive bias         | 1     | 2     | 3      | 4     | 5     | 6   | 7
1. Anchoring effect    | –     |       |        |       |       |     |
2. Belief bias         | −.09  | –     |        |       |       |     |
3. Overconfidence bias | −.09  | .03   | –      |       |       |     |
4. Hindsight bias      | .18⁎⁎ | .04   | −.18⁎⁎ | –     |       |     |
5. Base rate neglect   | .12   | .09   | −.01   | .12   | –     |     |
6. Sunk cost effect    | .16⁎  | .18⁎⁎ | .10    | .15⁎⁎ | .17⁎⁎ | –   |
7. Outcome bias        | −.01  | .20⁎⁎ | .04    | .07   | .06   | .06 | –

Note. All p values represent two-sided tests.
⁎ p < .05.
⁎⁎ p < .01.

To examine the factor structure, an exploratory factor analysis of the seven cognitive bias measures was carried out. Two factors were retained on the basis of parallel analysis results.⁴ However, the maximum likelihood method of factor extraction shows that these two factors have low loadings from the bias measures and account for a small amount (only 16.92%) of the total variance. These low values reflect the low correlations among the cognitive biases. The factors were rotated using the direct oblimin method, and the results are presented in Table 5. The correlation between the two factors is r = −.15.

⁴ Since parallel analyses (Horn, 1965) of adjusted correlation matrices tend to indicate more factors than warranted (Buja & Eyuboglu, 1992), principal component eigenvalues were used to determine the number of common factors. The third principal component was not retained because its eigenvalue (λ3 = 1.04) was smaller than the critical value of 1.11 (O'Connor, 2000).
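
The retention rule in the footnote is Horn's parallel analysis applied to principal-component eigenvalues. A sketch of that procedure, assuming a participants-by-measures score matrix (the use of a high percentile as the criterion follows O'Connor, 2000; the implementation is ours):

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Retain components whose observed PCA eigenvalues exceed the chosen
    percentile of eigenvalues obtained from random normal data of the
    same shape (n participants x k measures)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.empty((n_iter, k))
    for i in range(n_iter):
        sim = rng.standard_normal((n, k))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    return int((observed > threshold).sum()), observed, threshold
```

With n = 243 and k = 7, a critical value near 1.11 for the third component, as reported in the footnote, is the kind of threshold this procedure produces.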

Table 5
Factor pattern matrix for seven cognitive bias measures.

Cognitive bias      | Normative irrationality | Ecological rationality
Anchoring effect    | .14                     | .31
Belief bias         | .34                     | −.05
Overconfidence bias | .25                     | −.42
Hindsight bias      | .12                     | .50
Base rate neglect   | .26                     | .15
Sunk cost effect    | .22                     | −.01
Outcome bias        | .54                     | .08

Note. Extraction method: maximum likelihood. Rotation method: direct oblimin. Loadings above .30 are in bold font.

Although the factors cannot be distinguished in terms of any of the alternative taxonomies presented in Table 1, working interpretations of these factors can be based on the main theoretical accounts of rational behavior. The first factor has loadings of at least .30 from belief bias and outcome bias. Higher scores on this factor indicate higher rates of predictably irrational responses, so it could be interpreted as a factor of normative irrationality. On the other hand, individuals scoring high on the second factor were more prone to change their initial answers toward subsequently given external information (positive loadings of anchoring effect and hindsight bias), but less prone to overrate their own abilities (negative loading of overconfidence bias). Since responsiveness to feedback and well-calibrated confidence judgments are part of the description of ecologically rational agents (Gigerenzer et al., 1991; Hertwig et al., 2003; Hoffrage et al., 2000), the second factor could be interpreted as a factor of ecological rationality. However, given the relatively low loadings of the cognitive bias measures on these two factors and the small percentage of total variance they account for, their replicability may be low.

3.3. Correlation between biases and measures of intelligence and non-cognitive constructs

The upper half of Table 6 contains the coefficients of correlation between the cognitive bias measures and the other measures collected in this study. Overall, the correlations are low. The highest correlation is r = −.24, between the Raven's Progressive Matrices test and base rate neglect.

Descriptive statistics, reliability indices and correlations between the non-bias measures are presented in the lower part of Table 6. All of these measures show fair reliability (αs > .70), with the exception of the Cognitive Reflection Test, which consists of only three items. A strong correlation was obtained between Need for Cognition and Openness/Intellect (r = .58, p < .001). Clusters of high associations were detected for the Vocabulary, Analogies and Synonyms–Antonyms tests (rs ranged from .39 to .57) and also for the Raven's Progressive Matrices, 3D Space and Swaps tests (rs were in the range from .40 to .52). Performance on the Cognitive Reflection Test was poor, with only 6.67% correct answers per participant, and it is moderately correlated with performance on the intelligence tests.

Table 6
Correlations between biases and measures of intelligence and non-cognitive constructs.

                         | 8      | 9      | 10     | 11    | 12     | 13     | 14     | 15    | 16

Cognitive bias measures
1. Anchoring effect      | .03    | −.08   | .02    | −.02  | −.11   | −.03   | −.02   | .10   | .21⁎⁎
2. Belief bias           | −.15⁎  | −.22⁎⁎ | −.19⁎⁎ | −.10  | −.12   | −.19⁎⁎ | −.17⁎⁎ | .01   | .01
3. Overconfidence bias   | −.21⁎⁎ | −.15⁎  | −.14⁎  | −.08  | −.00   | .02    | −.07   | .10   | .05
4. Hindsight bias        | .05    | .06    | −.01   | −.05  | −.08   | −.10   | −.06   | −.05  | −.02
5. Base rate neglect     | −.24⁎⁎ | −.20⁎⁎ | −.21⁎⁎ | −.16⁎ | −.18⁎⁎ | −.13⁎  | −.22⁎⁎ | −.10  | −.10
6. Sunk cost effect      | −.12   | −.09   | −.14⁎  | −.06  | −.09   | −.04   | −.22⁎⁎ | −.07  | −.11
7. Outcome bias          | −.17⁎⁎ | −.06   | −.13   | .00   | −.01   | −.21⁎⁎ | −.12   | −.04  | −.01

Cognitive ability tests
8. Raven's Matrices      | –
9. Swaps Test            | .48⁎⁎  | –
10. 3D Space Test        | .52⁎⁎  | .40⁎⁎  | –
11. Vocabulary Test      | .26⁎⁎  | .11    | .21⁎⁎  | –
12. Synonyms–Antonyms    | .19⁎⁎  | .21⁎⁎  | .11    | .39⁎⁎ | –
13. Test of Analogies    | .26⁎⁎  | .28⁎⁎  | .16⁎   | .42⁎⁎ | .57⁎⁎  | –
14. Cognitive Reflection | .25⁎⁎  | .24⁎⁎  | .35⁎⁎  | .10   | .07    | .14⁎   | –

Non-cognitive tests
15. Need for Cognition   | .19⁎⁎  | .12    | .18⁎⁎  | .11   | .11    | .14⁎   | .16⁎   | –
16. Openness/Intellect   | .22⁎⁎  | .13⁎   | .21⁎⁎  | .19⁎⁎ | .09    | .15⁎   | .17⁎   | .58⁎⁎ | –

Mean                     | 12.77  | 13.67  | 22.99  | 10.46 | 12.60  | 7.69   | 0.20   | 3.16  | 3.26
Standard deviation       | 3.16   | 3.84   | 6.58   | 3.21  | 2.10   | 1.48   | 0.50   | 0.62  | 0.53
Number of items          | 18     | 20     | 39     | 56    | 40     | 39     | 3      | 18    | 12
Cronbach's α             | .78    | .79    | .79    | .73   | .75    | .72    | .39    | .85   | .83

Note. All p values represent two-sided tests.
⁎ p < .05.
⁎⁎ p < .01.

Three latent factors were extracted from the matrix of intercorrelations among the non-bias measures, i.e., the cognitive abilities and non-cognitive traits (variables 8 to 16 in Table 6). The outcome of this analysis is presented in Table 7. The first was a fluid intelligence (Gf) factor that had high loadings from the three non-verbal cognitive ability tests – the Raven's Progressive Matrices, Swaps and 3D Space tests – and only a moderate loading from the Cognitive Reflection Test. The three verbal ability tests – the Vocabulary, Synonyms–Antonyms and Analogies tests – loaded on the second factor, which can be interpreted as a crystallized intelligence (Gc) factor. Finally, the third factor has loadings from the Openness/Intellect and Need for Cognition scales, and it can be seen as a thinking dispositions (TD) factor. The results also indicate that these factors were related — Gf correlated moderately with both Gc (r = .40) and TD (r = .32), while the correlation between the last two was relatively small (r = .22).

Table 7
Factor pattern matrix for nine non-bias measures.

Variable             | Gf   | Gc   | TD
Raven's Matrices     | .73  | .04  | −.01
Swaps Test           | .58  | .10  | −.07
3D Space Test        | .74  | −.08 | .00
Vocabulary Test      | .07  | .48  | .08
Synonyms–Antonyms    | −.06 | .77  | −.03
Test of Analogies    | .02  | .78  | −.01
Cognitive Reflection | .39  | −.02 | .06
Need for Cognition   | .03  | .03  | .58
Openness/Intellect   | −.05 | −.01 | .99

Note. Extraction method: maximum likelihood. Rotation method: direct oblimin. Loadings above .40 are in bold font. TD — thinking dispositions.

In order to find out whether the cognitive bias measures have something in common with the cognitive abilities and non-cognitive measures, a factor analysis – maximum likelihood extraction and direct oblimin rotation – encompassing all 16 variables was performed. This resulted in five factors (see Table 8). The first three factors corresponded closely to the factors extracted in the previous analysis (Gf, Gc and TD). However, overconfidence bias also loaded negatively, albeit moderately, on the fluid intelligence factor, indicating that higher-ability participants were to some extent less prone to overestimate their own abilities. The last two factors were defined by the cognitive bias measures: Bias1 has loadings from the belief bias and outcome bias measures, and Bias2 has loadings from the anchoring effect, hindsight bias, base rate neglect and sunk cost effect.
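
Both factor analyses use the same recipe, maximum likelihood extraction with direct oblimin rotation. As a pointer for reanalysis, a sketch using the third-party factor_analyzer package (our choice of tooling; the paper does not name its software):

```python
# pip install factor_analyzer
from factor_analyzer import FactorAnalyzer

def extract_factors(scores, n_factors):
    """ML extraction with direct oblimin rotation on a
    participants-by-variables score matrix; returns the rotated
    pattern matrix and the factor correlation matrix."""
    fa = FactorAnalyzer(n_factors=n_factors, method="ml", rotation="oblimin")
    fa.fit(scores)
    return fa.loadings_, fa.phi_  # phi_ is available for oblique rotations
```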

Table 8
Factor pattern matrix for all sixteen cognitive measures.

Variable             | Gc   | TD   | Gf   | Bias1 | Bias2
Anchoring effect     | −.07 | .24  | .06  | −.08  | .44
Belief bias          | −.07 | .06  | −.10 | .31   | .09
Overconfidence bias  | .03  | .10  | −.38 | −.02  | −.10
Hindsight bias       | −.05 | −.01 | .27  | .12   | .37
Base rate neglect    | −.09 | −.07 | −.16 | −.01  | .36
Sunk cost effect     | .06  | −.10 | −.04 | .06   | .40
Outcome bias         | .02  | .02  | .02  | .54   | −.06
Raven's Matrices     | .13  | .13  | .64  | −.11  | −.07
Swaps Test           | .18  | .03  | .49  | −.09  | −.10
3D Space Test        | −.01 | .15  | .54  | −.16  | −.18
Vocabulary Test      | .49  | .10  | .12  | .08   | −.02
Synonyms–Antonyms    | .72  | −.02 | .00  | .09   | −.08
Test of Analogies    | .85  | .01  | −.10 | −.30  | .18
Cognitive Reflection | −.03 | .15  | .22  | −.21  | −.26
Need for Cognition   | .05  | .62  | −.03 | .02   | −.03
Openness/Intellect   | .05  | .95  | −.03 | .09   | .07

Note. Extraction method: maximum likelihood. Rotation method: direct oblimin. Loadings above .30 are in bold font. TD — thinking dispositions.

The factor intercorrelation matrix is presented in Table 9. Clearly, all correlations between the factors are lower than .25, indicating that the five factors are essentially uncorrelated.

Table 9
Factor correlation matrix.

Factor   | Gc   | Openness | Gf   | Bias1 | Bias2
Gc       | –    |          |      |       |
Openness | .17  | –        |      |       |
Gf       | .18  | .18      | –    |       |
Bias1    | −.20 | −.17     | −.22 | –     |
Bias2    | −.25 | −.12     | −.07 | .25   | –

It should be noted that the two cognitive bias factors of Table 8 do not replicate those extracted in the factor analysis presented in Table 5. Correlations between the factor scores from the two analyses are presented in Table 10. The Bias1 factor correlates moderately with both factors of Table 5, and the Bias2 factor correlates only with the normative irrationality factor. These outcomes are due to the low correlations among the cognitive bias measures themselves; they indicate poor robustness of the cognitive bias factors.

Unlike the bias factors presented above, the Gf, Gc and TD factors of Table 8 were strongly correlated (rs ≥ .90) with the corresponding factors extracted in the factor analysis presented in Table 7, as indicated by the results displayed in Table 11.

Table 10
Correlations between cognitive bias factors from Tables 5 and 8.

                        | Table 8 bias factors
Table 5 bias factors    | Bias1 | Bias2
Normative irrationality | .55   | .38
Ecological rationality  | −.67  | −.07

Note. Factor scores are estimated by the Anderson–Rubin method. Significant correlations (p < .01) are in bold font.

Table 11
Correlations between fluid intelligence (Gf), crystallized intelligence (Gc) and thinking disposition (TD) factors from Tables 7 and 8.

                         | Table 8 non-bias factors
Table 7 non-bias factors | Gf  | Gc   | TD
Gf                       | .90 | −.09 | −.02
Gc                       | .10 | .97  | .08
TD                       | .05 | −.07 | .99

Note. Factor scores are estimated by the Anderson–Rubin method. Significant correlations (p < .01) are in bold font.

4. Discussion

The research findings presented in this paper extend previous work on individual differences in cognitive biases in several ways. First, our study is based on a more comprehensive set of cognitive biases. Since the definitive number of cognitive biases is not known (cf. Baron, 2008; Tversky & Kahneman, 1974), random sampling from the domain was not possible. To enhance sample representativeness, a heterogeneous set of phenomena that covered various categories in alternative taxonomies was selected. Second, multi-item instruments for collecting measures of individual differences in the various cognitive biases were devised, some of them based on examples found in previous studies (Baron & Hershey, 1988; De Bruin et al., 2007; Evans et al., 1983; Kahneman & Tversky, 1973; Markovits & Nantel, 1989; Stanovich & West, 1998; Thaler, 1980; Tversky & Kahneman, 1974). Also, a new procedure for the measurement of susceptibility to the anchoring effect was developed, and a more rigorous scoring procedure for belief bias was proposed. Within-subject designs were applied where rational behavior was defined with respect to the consistency criterion (anchoring effect, hindsight bias, outcome bias). In addition to increased power, the repeated measurement allowed for the collection of cognitive bias measures for each participant. Third, the study encompasses a wide set of potential correlates of cognitive biases and includes multiple measures of well-known psychometric constructs, such as fluid and crystallized intelligence.

This research had three aims. The first was to assess both the experimental and the differential reliability of cognitive bias measures. The results presented in Table 2 confirm the experimental reliability of the examined phenomena: each normatively irrelevant variable had a large effect on participants' responses. The exception was the feedback given in the hindsight procedure, which also significantly, but only moderately, influenced participants' memory of their previously stated confidence. This is in accordance with the results of the available meta-analyses (Christensen-Szalanski & Willham, 1991; Guilbault et al., 2004). All observed effects were in the predicted direction, thus confirming that departures from normative models of rational behavior were not random, but systematic. As expected, normatively irrelevant variables did not affect all participants in the same way. Although a majority of participants showed bias on each task, some of them gave normative responses. Furthermore, the results presented in Table 3 indicate that reliable individual differences are present — satisfactory levels of internal consistency (αs > .70) were registered for all of the cognitive bias measures except hindsight bias. Even for the hindsight bias measure, the reliability was higher than that reported in previous studies (Pohl, 1999, according to Musch, 2003). In sum, in comparison to similar previous attempts, the instruments devised for the purposes of this study show better psychometric properties.

The second aim was to examine the latent structure of the cognitive bias domain. Since different taxonomies suggest that
84 P. Teovanović et al. / Intelligence 50 (2015) 75–86

the population of cognitive biases is considerably heterogeneous, we postulated that a single latent factor of cognitive biases is unrealistic. Our results confirm this expectation. Correlations between individual cognitive biases, presented in Table 4, were of small magnitude, and they tended to be both positive and negative. Although some previous studies claimed positive manifold among the cognitive bias measures (De Bruin et al., 2007; Parker & Fischhoff, 2005; Stanovich & West, 1998), close inspection of the matrices of intercorrelations reported in these studies reveals the absence of uniformly positive correlations. In the present study, the minimum criteria for factor analysis were met and two common factors were extracted. The obtained loadings indicate that these factors cannot be distinguished on the basis of any known taxonomy of cognitive biases found in the literature (Arnott, 2006; Baron, 2008; Carter et al., 2007; De Bruin et al., 2007; Haselton et al., 2005; Pohl, 2004; Stanovich, 2009). However, our factors corresponded to the theoretical distinction between normative (Gilovich et al., 2002; Kahneman et al., 1982; Tversky & Kahneman, 1974) and ecological rationality (Gigerenzer, 1996, 2004; Gigerenzer et al., 1991; Hertwig et al., 2003; Hoffrage et al., 2000). Although these latent factors of the cognitive bias domain could be seen as empirically derived dimensions, much of the reliable variance of the cognitive bias measures remains unexplained. It is likely that additional or different factors would emerge with the development of new measures. In the extended factor analysis that included all cognitive biases and the other measures collected in this study, two cognitive bias factors emerged apart from the relatively independent factors of fluid and crystallized intelligence and thinking dispositions. However, the nature of the cognitive bias factors was not the same as that obtained in the first factor analysis. Thus, this study provides evidence against a one-factor theory of rational behavior in the field of judgment and decision making. This is in line with the results reported in previous studies (De Bruin et al., 2007; Klaczynski, 2001; Parker & Fischhoff, 2005; Stanovich & West, 1998, 2000; Toplak et al., 2011, 2014; Weaver & Stewart, 2012; West et al., 2008).

The third aim was to investigate potential correlates of cognitive biases. A mixed pattern of reported correlations supports the notion of cognitive bias heterogeneity. Overall, the results revealed that some cognitive bias measures were related (negatively) to measures of intelligence and performance on the Cognitive Reflection Test, but not to Need for Cognition measures. Openness/Intellect was correlated positively with anchoring, which is in agreement with the notion that these phenomena share features that concern enhanced sensitivity to experienced information (McElroy & Dowd, 2007). Performance on all three cognitive tests that loaded highly on fluid intelligence factors was significantly related to belief bias, overconfidence bias and base rate neglect measures, while some of these tests were also related to the sunk cost effect and outcome bias. This result indicates that considerable cognitive capacity is required to compute normatively correct responses to bias tasks, i.e., it points to the role of algorithmic limitations in the emergence of these phenomena (Stanovich & West, 1998, 2008). Cognitive biases were also relatively independent from crystallized intelligence, which captures individual differences in previously acquired "rules, procedures and strategies that can be retrieved by the analytical system and used to substitute for the heuristic response" (Stanovich & West, 2008, p. 687). Vocabulary and Synonyms–Antonyms tests were related only to base rate neglect, while scores on the Analogies test correlated significantly with belief bias, base rate neglect and outcome bias. Finally, our decision to include the CRT in the study was motivated by the results of previous studies that evidenced its predictive validity for performance on various cognitive bias tasks (Albaity et al., 2014; Frederick, 2005; Hoppe & Kusterer, 2011; Oechssler et al., 2009), even after controlling for the variance of intelligence measures (Toplak et al., 2011, 2014). The results presented here indicated that the CRT was significantly correlated with the sunk cost effect, belief bias and base rate neglect, suggesting that the ability to resist the first response that comes to mind plays a role in the emergence of these biases (Frederick, 2005). However, the CRT loaded moderately on the Gf factor in the preliminary analysis, but failed to load significantly on any of the five factors in the extended analysis, most probably due to its low reliability. Future research should use the expanded seven-item version of this instrument that has recently been proposed by Toplak et al. (2014) and shown to have better metric properties.

The results of our study indicate that measures of cognitive biases do not fit within the existing theories of the organization of cognitive abilities and intelligence. It appears that intelligence has little to do with the decision making processes captured by the cognitive bias tasks sampled in the present study.

Although there are cognitive processes – e.g., those related to sensory/perceptual functioning – that show similarly low correlations with higher-order processes of intelligence, these differ from cognitive biases in that they tend to define broad factors, such as broad visualization or broad auditory function, at a higher level (Stankov & Horn, 1980). The present findings indicate that a broad cognitive bias factor is unlikely to emerge, implying that measures of cognitive biases may be "domain-specific" with little explanatory power beyond the very narrow topic they cover. Again, there are examples of such tasks, notably within the domain of neuropsychological assessment, which are not included in typical batteries for the assessment of intelligence. Indeed, there is already some evidence that measures of rationality, including cognitive biases, correlate with measures of executive functioning (Toplak et al., 2011), objective and subjective numeracy (Liberali et al., 2012), actively open-minded thinking (Stanovich, 2009) and decision making styles (De Bruin et al., 2007). We believe that these links should be explored in the future.

Failure to obtain noteworthy correlations may be attributed either to a true lack of relationship between the processes tapped by the bias tasks or to measurement error. Since our bias measures show satisfactory reliabilities and, at least for some of them, our findings agree with those reported by other investigators, it is reasonable to conclude that measurement error did not play a major role in the present study.

In conclusion, the results presented in this paper strongly suggest that cognitive bias measures share only a small portion of variance with each other and with the well-known measures of fluid and crystallized intelligence. In other words, substantial parts of the reliable variance of individual differences in susceptibility to various cognitive biases are unique, and no cognitive bias we have studied can be accounted for by the well-established psychometric constructs.
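To make the analytic steps described above concrete, the following sketch illustrates how factorability checks, Horn's (1965) parallel analysis, and the extraction of two common factors could be carried out. This is our illustration, not the authors' analysis code: the simulated scores, variable names, and the use of Python's factor_analyzer package are assumptions introduced purely for demonstration.

```python
# A minimal sketch, assuming simulated data, of the reported analytic steps:
# factorability checks, Horn's (1965) parallel analysis, and extraction of
# two common factors. Not the authors' code.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

rng = np.random.default_rng(2015)
bias_names = ["anchoring", "belief_bias", "overconfidence", "hindsight",
              "base_rate_neglect", "outcome_bias", "sunk_cost"]
# Placeholder scores standing in for the seven bias measures (N = 243).
scores = pd.DataFrame(rng.normal(size=(243, 7)), columns=bias_names)

# Minimum criteria for factoring: Kaiser-Meyer-Olkin sampling adequacy.
kmo_per_item, kmo_overall = calculate_kmo(scores)
print(f"Overall KMO: {kmo_overall:.2f}")

def parallel_analysis(data: pd.DataFrame, n_iter: int = 100) -> int:
    """Horn's parallel analysis: count observed eigenvalues that exceed the
    mean eigenvalues of correlation matrices from same-sized random data."""
    obs = np.linalg.eigvalsh(data.corr().to_numpy())[::-1]
    rand = np.zeros((n_iter, data.shape[1]))
    for i in range(n_iter):
        sim = rng.normal(size=data.shape)
        rand[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    return int(np.sum(obs > rand.mean(axis=0)))

print(f"Factors suggested by parallel analysis: {parallel_analysis(scores)}")

# Exploratory factor analysis with an oblique rotation; two factors are
# retained here to mirror the solution reported in the study.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(scores)
print(pd.DataFrame(fa.loadings_, index=bias_names, columns=["F1", "F2"]))
```

On real data it is the pattern of loadings, rather than the mere number of factors, that would carry the interpretive weight discussed above.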
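The closing argument about measurement error can also be made quantitative through the classical correction for attenuation, r_true = r_observed / sqrt(rel_x × rel_y). The sketch below, again our own hedged illustration using hypothetical item scores rather than the study's data, computes Cronbach's alpha and shows that with reliabilities near .70 an observed correlation of .15 rises only to about .21 after correction, still far from a positive manifold.

```python
# Hedged illustration with hypothetical data: satisfactory reliabilities imply
# that attenuation cannot turn small observed correlations into large ones.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def disattenuated_r(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Classical correction for attenuation: r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / np.sqrt(rel_x * rel_y)

rng = np.random.default_rng(75)
# Hypothetical 6-item scale: one common factor plus noise, tuned so that
# alpha lands near the .70 benchmark reported for most bias measures.
common = rng.normal(size=(243, 1))
items = common + rng.normal(scale=1.6, size=(243, 6))
print(f"alpha = {cronbach_alpha(items):.2f}")                     # roughly .70
print(f"corrected r = {disattenuated_r(0.15, 0.70, 0.70):.2f}")   # about .21
```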

Acknowledgments

We wish to express our sincere gratitude to anonymous reviewers for their helpful comments on an earlier draft of this manuscript. We also wish to thank Kaja Damnjanović, Simon Jackson, Johannes Keller, Aleksandar Kostić, Goran Opačić, Richard Roberts, Oliver Tošković, Oliver Wilhelm, and Iris Žeželj for candid and constructive discussions. Finally, we are grateful to Marija Čanković, Maja Jovanović, Nikola Majksner, Tatjana Mentus, Luka Mijatović, Borislav Radović, Filip Teovanović, Jovana Tripunović, and Guliver Vigorski for their help with data collection.

References

Ackerman, P.L., Beier, M.E., & Bowen, K.R. (2002). What we really know about our abilities and our knowledge. Personality and Individual Differences, 33(4), 587–605.
Ackerman, P.L., & Heggestad, E.D. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121(2), 219–245.
Albaity, M., Rahman, M., & Shahidul, I. (2014). Cognitive Reflection Test and behavioral biases in Malaysia. Judgment and Decision Making, 9(2), 149–151.
Ariely, D. (2009). Predictably irrational: The hidden forces that shape our decisions (2nd ed.). New York: Harper.
Arkes, H.R., & Blumer, C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35(1), 124–140.
Arnott, D. (2006). Cognitive biases and decision support systems development: A design science approach. Information Systems Journal, 16(1), 55–78.
Baddeley, A.D., & Hitch, G. (1974). Working memory. In G.A. Bower (Ed.), The psychology of learning and motivation. New York: Academic Press.
Baron, J. (2008). Thinking and deciding (4th ed.). New York: Cambridge University Press.
Baron, J., & Hershey, J.C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54(4), 569–578.
Brenner, L.A., Koehler, D.J., Liberman, V., & Tversky, A. (1996). Overconfidence in probability and frequency judgments: A critical examination. Organizational Behavior and Human Decision Processes, 65(3), 212–219.
Buja, A., & Eyuboglu, N. (1992). Remarks on parallel analysis. Multivariate Behavioral Research, 27, 509–540.
Cacioppo, J.T., & Petty, R.E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131.
Cacioppo, J.T., Petty, R.E., & Kao, C. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48(3), 306–307.
Carroll, J.B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.
Carter, C.R., Kaufmann, L., & Michel, A. (2007). Behavioral supply management: A taxonomy of judgment and decision-making biases. International Journal of Physical Distribution and Logistics Management, 37(8), 631–669.
Christensen-Szalanski, J.J., & Willham, C.F. (1991). The hindsight bias: A meta-analysis. Organizational Behavior and Human Decision Processes, 48(1), 147–168.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.
De Bruin, B.W., Parker, A.M., & Fischhoff, B. (2007). Individual differences in adult decision-making competence. Journal of Personality and Social Psychology, 92(5), 938–956.
DeYoung, C.G. (2011). Intelligence and personality. In R.J. Sternberg, & S.B. Kaufman (Eds.), The Cambridge handbook of intelligence (pp. 711–737). New York: Cambridge University Press.
Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12(5), 391–396.
Evans, J.S.B., Barston, J.L., & Pollard, P. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory and Cognition, 11, 295–306.
Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288–299.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.
Furnham, A., & Boo, H.C. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1), 35–42.
Furnham, A., Boo, H.C., & McClelland, A. (2012). Individual differences and the susceptibility to the influence of anchoring cues. Journal of Individual Differences, 33(2), 89–93.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychological Review, 103(3), 592–596.
Gigerenzer, G. (2004). Fast and frugal heuristics: The tools of bounded rationality. In D.J. Koehler, & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 62–88). Oxford, UK: Blackwell.
Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98(4), 506–528.
Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge University Press.
Guilbault, R.L., Bryant, F.B., Brockway, J.H., & Posavac, E.J. (2004). A meta-analysis of research on hindsight bias. Basic and Applied Social Psychology, 26(2–3), 103–117.
Hardt, O., & Pohl, R. (2003). Hindsight bias as a function of anchor distance and anchor plausibility. Memory, 11(4–5), 379–394.
Haselton, M.G., Nettle, D., & Andrews, P.W. (2005). The evolution of cognitive bias. In D. Buss (Ed.), The handbook of evolutionary psychology (pp. 724–746). John Wiley & Sons.
Hertwig, R., Fanselow, C., & Hoffrage, U. (2003). Hindsight bias: How knowledge and heuristics affect our reconstruction of the past. Memory, 11(4–5), 357–377.
Hoffrage, U., Hertwig, R., & Gigerenzer, G. (2000). Hindsight bias: A by-product of knowledge updating? Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(3), 566–581.
Hoppe, E.I., & Kusterer, D.J. (2011). Behavioral biases and cognitive reflection. Economics Letters, 110(2), 97–100.
Horn, J.L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.
Kahneman, D. (2000). A psychological point of view: Violations of rational rules as a diagnostic of mental processes. Behavioral and Brain Sciences, 23(5), 681–683.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K.J. Holyoak, & R.G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp. 267–293). New York: Cambridge University Press.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.
Klaczynski, P.A. (2001). Analytic and heuristic processing influences on adolescent reasoning and decision-making. Child Development, 72(3), 844–861.
Klaczynski, P.A., & Daniel, D.B. (2005). Individual differences in conditional reasoning: A dual-process account. Thinking and Reasoning, 11(4), 305–325.
Knežević, G., & Opačić, G. (2011). Vocabulary Test. Unpublished material.
Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M., & Pardo, S.T. (2012). Individual differences in numeracy and cognitive reflection, with implications for biases and fallacies in probability judgment. Journal of Behavioral Decision Making, 25(4), 361–381.
Markovits, H., & Nantel, G. (1989). The belief-bias effect in the production and evaluation of logical conclusions. Memory and Cognition, 17(1), 11–17.
McCrae, R.R., & Costa, P.T., Jr. (2004). A contemplated revision of the NEO Five-Factor Inventory. Personality and Individual Differences, 36(3), 587–596.
McElroy, T., & Dowd, K. (2007). Susceptibility to anchoring effects: How openness-to-experience influences responses to anchoring cues. Judgment and Decision Making, 2(1), 48–53.
Musch, J. (2003). Personality differences in hindsight bias. Memory, 11(4–5), 473–489.
O'Connor, B.P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instrumentation, and Computers, 32, 396–402.
Oechssler, J., Roider, A., & Schmitz, P.W. (2009). Cognitive abilities and behavioral biases. Journal of Economic Behavior and Organization, 72(1), 147–152.
Pallier, G., Wilkinson, R., Danthiir, V., Kleitman, S., Knezevic, G., Stankov, L., et al. (2002). The role of individual differences in the accuracy of confidence judgments. The Journal of General Psychology, 129(3), 257–299.
Parker, A.M., & Fischhoff, B. (2005). Decision-making competence: External validation through an individual-differences approach. Journal of Behavioral Decision Making, 18(1), 1–27.
Pohl, R.F. (1999). Hindsight bias: Robust, but not reliable. Unpublished manuscript, University of Giessen, Germany.
Pohl, R.F. (Ed.). (2004). Cognitive illusions: A handbook on fallacies and biases in thinking, judgement and memory. Psychology Press.
Raven, J.C., Court, J.H., & Raven, J. (1979). Manual for Raven's Progressive Matrices and Vocabulary Scales. London: H. K. Lewis & Co.
Stankov, L. (1994). The complexity effect phenomenon is an epiphenomenon of age-related fluid intelligence decline. Personality and Individual Differences, 16(2), 265–288.
Stankov, L. (1999). Mining on the "no man's land" between intelligence and personality. In P.L. Ackerman, P.C. Kyllonen, & R.D. Roberts (Eds.), Learning and individual differences: Process, trait, and content determinants (pp. 315–337). Washington, DC: American Psychological Association.
Stankov, L. (2000). Complexity, metacognition, and fluid intelligence. Intelligence, 28(2), 121–143.
Stankov, L. (2013). Noncognitive predictors of intelligence and academic achievement: An important role of confidence. Personality and Individual Differences, 55(7), 727–732.
Stankov, L., & Horn, J.L. (1980). Human abilities revealed through auditory tests. Journal of Educational Psychology, 72(1), 19–42.
Stanovich, K. (2003). The fundamental computational biases of human cognition: Heuristics that (sometimes) impair decision making and problem solving. In J. Davidson, & R. Sternberg (Eds.), The psychology of problem solving (pp. 291–342). New York: Cambridge University Press.
Stanovich, K.E. (2009). Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory? In K. Frankish, & J. St. B.T. Evans (Eds.), In two minds: Dual processes and beyond (pp. 55–88). Oxford, England: Oxford University Press.
Stanovich, K.E. (2012). On the distinction between rationality and intelligence: Implications for understanding individual differences in reasoning. In K.J. Holyoak, & R.G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. 343–365). Oxford, England: Oxford University Press.
Stanovich, K.E., & West, R.F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127(2), 161–188.
Stanovich, K.E., & West, R.F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 701–717.
Stanovich, K.E., & West, R.F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94(4), 672–695.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73(3), 437–446.
Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior and Organization, 1, 39–60.
Toplak, M.E., West, R.F., & Stanovich, K.E. (2011). The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory and Cognition, 39(7), 1275–1289.
Toplak, M.E., West, R.F., & Stanovich, K.E. (2014). Assessing miserly information processing: An expansion of the Cognitive Reflection Test. Thinking and Reasoning, 20(2), 147–168.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Weaver, E.A., & Stewart, T.R. (2012). Dimensions of judgment: Factor analysis of individual differences. Journal of Behavioral Decision Making, 25(4), 402–413.
Wegener, D.T., Petty, R.E., Detweiler-Bedell, B.T., & Jarvis, W.B.G. (2001). Implications of attitude change theories for numerical anchoring: Anchor plausibility and the limits of anchor effectiveness. Journal of Experimental Social Psychology, 37(1), 62–69.
West, R.F., Toplak, M.E., & Stanovich, K.E. (2008). Heuristics and biases as measures of critical thinking: Associations with cognitive ability and thinking dispositions. Journal of Educational Psychology, 100(4), 930–941.
Wolf, B., Momirović, K., & Džamonja, Z. (1992). Intelligence Test Battery — KOG 3. Belgrade: Center for Applied Psychology.
