
CHAPTER TWO

What Intuitions Are. . . and Are Not


Valerie A. Thompson¹
Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
¹Corresponding author: e-mail address: valerie.thompson@usask.ca

Contents
1. Introduction
2. Intuitions as Type 1 Judgments
2.1 The Architecture of Dual-Process Theories
2.2 Intuitions as the Autonomous Set of Systems
2.3 Intuitions as Natural Assessments
2.4 Summary
3. Intuitions as Memories
3.1 Intuitions as Implicit and Associative Learning
3.2 Intuition as Skilled Memory
3.3 Intuition as Recognition Memory
3.4 Intuitions as Gist
3.5 Summary
4. Intuitions as Metacognition
4.1 Intuitions of Metamemory
4.2 Intuitions and the Feeling of Rightness
4.3 Intuitions of Coherence
4.4 Summary
5. Intuitions as Feelings
6. Summary
6.1 Conclusions
Acknowledgment
References

Abstract
Intuitions are commonly defined in terms of their supposed characteristics, for example,
fast, implicit, parallel, and automatic. In this chapter, I argue that such an approach fails
to provide a sufficiently rigorous definition to be the basis for scientific inquiry. Instead,
I propose that intuitive thought is best understood in terms of the mechanisms that give
rise to it. Intuitions may arise from the operation of type 1 processes, as in dual-process
theories, or they may arise from a number of different memory processes, such as associa-
tive learning, skilled memory, recognition memory, and gist memory. I also argue that
many metacognitive processes, specifically, the processes by which our cognitive pro-
cesses are monitored, are also a form of intuition. Emotional processes can form the
basis of intuitive judgment and can also motivate behaviors and decisions. Although
these processes may give rise to judgments that may all be classified as “intuitive,”
the characteristics of the judgments that arise from them may differ. A second goal
of this chapter is to look for points of intersection between these views and to sug-
gest avenues for future research. One such avenue is to examine the role of coherence
in terms of both the information that gives rise to intuitive judgments and the processes
that monitor those judgments. The chapter concludes with a discussion of the relative
value of intuitive and deliberate thinking.

1. INTRODUCTION
What do we mean when we say that we decided “intuitively”? One
might wonder how the answer to that question could justify an entire chap-
ter, because the answer is, well, intuitive. Many, including trained scientists,
use the term with an “intuitive understanding” (Hogarth, 2001, p. 6). It
takes only a quick read through published articles to see this: whereas other
terms are defined and operationalized with care, the term “intuition” is often
used without rigorous definition and often without justification. For exam-
ple, the phrase “intuitive beliefs” (De Neys, Rossi, & Houdé, 2013) does not
strike one as odd, even though there may be no evidence provided to sub-
stantiate the status of said beliefs as intuitive in a particular context. Such
usage is common, and I, myself, have been guilty of using the term without
giving much thought to its scientific status. The point is that, as scientists, we
write, investigate, and theorize about intuition and intuitive processes rely-
ing on either common sense definitions or scientifically vacuous ones. Intu-
ition, like many other abstract constructs, is difficult to define and even more
difficult to operationalize in a way that can be studied scientifically.
As an example, a review of recent collections on intuitive reasoning rev-
ealed little consensus on the definition of intuition (Janoff-Bulman, 2010,
special issue in Psychological Inquiry and recently edited volumes, e.g.,
Plessner, Betsch, & Betsch, 2008). The definitions tend to cohere around
a family resemblance: intuitions are fast, involve knowing without knowing,
are automatic, require little effort, no conscious deliberation, and so on
(Tables 1.1 and 1.2 in Evans, 2010a, summarize many of the relevant attri-
butes). Family resemblances, however, are not defining features, so that the
overlap in phenomena encompassed by two particular definitions might be
minimal. Family resemblances also do not offer sufficient rigor to permit sci-
entific testing. That is, one needs to have a set of criteria that are sufficiently
well defined to provide an operational definition in order to be able to point
at a phenomenon and say, with confidence, “that is or is not an intuitive
decision.” Otherwise, we run the risk of classifying decisions as “intuitive”
because we have no other explanation for them; the alternative would be to
abandon the scientific inquiry into intuitive decision-making and declare it
to be a “useless concept” (Hammond, 2010).
This latter course is unlikely, however, as research into intuitive
decision-making is well entrenched, and the concept, however ill-defined,
is part of both the mainstream and scientific literatures: a search of the Psy-
cINFO database turned up close to 5000 articles with “intuition” in their
abstract and well over 600 with “intuition” in their title. The value of intu-
itive thinking has been debated in both the popular and scientific literature,
with popular books such as Gladwell (2005) extolling the virtues of think-
ing intuitively and others such as The Invisible Gorilla: And other Ways that
our Intuitions Deceive Us (Chabris & Simons, 2010) arguing the opposite.
The goal of this chapter is to provide a critical overview of “intuition”
as it is commonly used in the reasoning and decision-making literature
and to provide some clarity on how we might usefully proceed. Before
beginning, it would be helpful to consider the following situations and
try to determine whether or not each represents an example of intuitive
thought:
(1) (In conversation) “I just remembered that tomorrow is my mother’s
birthday.”
(2) You enter your house and recognize the chair in the living room.
(3) While proofreading an essay, Emma spots an awkwardly written sen-
tence, which takes her several minutes to rewrite.
(4) After several hours of trying to figure out how to do a mathematical
proof, John realizes that the problem is similar to one he solved earlier
and successfully applies the same approach.
(5) Within a few minutes of meeting someone, you come to the conclu-
sion that she is a rather cold and unfriendly person.
(6) Mary is trying to figure out which university to attend. Having nar-
rowed her choices to three, she lists the pros and cons of each. She
then decides on her home university because she does not want to
move away from her family.
(7) Walking along the pavement, you spot a pair of shoes in the window
that you love and proceed to buy.
(8) Within seconds of looking at a spreadsheet, Jane, an experienced
accountant, realizes there is an error.
(9) On his way home from work, Rick turns right on Water Street, which
is the same route that he normally travels.
(10) A man convicted of raping a wholesome 17-year-old high school stu-
dent is judged more harshly than the man convicted of raping a street
prostitute.
These examples illustrate why several of the common approaches to defining
intuition are problematic. A common approach is to try and define intuition
in terms of what it is not, and the most common contrast category is con-
scious, deliberate reasoning (e.g., Epstein, 2010; Evans, 2010a; Hogarth,
2001). This is problematic in a number of ways, not least of which that
one immediately encounters the problem of defining consciousness (e.g.,
Price & Norman, 2008). Moreover, as these examples illustrate, many cog-
nitive processes are a blend of conscious and unconscious thinking. For
example, John has an apparent insight to a problem after a period of delib-
erate thinking (Example 4). Mary appears to have adopted a deliberate
approach to choosing her university (Example 6), but the criteria by which
the universities were evaluated and the decision to remain close to family
were undoubtedly influenced by emotional (nondeliberate) factors. Another
problem, later outlined in the section on metacognition, is that deliberate
thinking can sometimes be initiated autonomously, for example, in response to an
implicit monitoring judgment, as in Examples 3 and 8 in the text earlier.
A crucial difference between the two types of thought is that people can
control the direction of their conscious deliberation, but probably not their
intuition (Thompson, 2013). A third, related problem is that intuitions are
not unconscious—we are consciously aware of the output of intuition
(Evans, 2010a; Kahneman, 2003); it is the processes by which the judgment
is produced that are thought to be unconscious and, sometimes, autonomous
(Evans, 2010a). Finally, defining intuition as unconscious processing, as
“knowing without knowing how,” does little to separate it from all the rest
of our cognitive processes, which are executed subconsciously and whose oper-
ation we have no insight to. For example, perception, categorization, and
memory retrievals would all count as intuitions under this defini-
tion, yet most would not classify Examples 1 and 2 in the preceding text as
intuitions. Nonetheless, the observation that intuitions are part of the vast
cognitive underground (Kahneman, 2011) may be one of the most impor-
tant insights into the nature of intuition, as it then invites us to apply what we
know about these other processes to the study of intuition.
A related strategy is to define intuitions in contrast to the other three “i”
words, impulses, instincts, and incubation (Epstein, 2010), although not all
authors would necessarily agree with this exclusion. The advantage of such
an approach is to limit the scope of intuition to the domain of learning and
experience (Betsch, 2008), which does reduce the complexity of the prob-
lem. However, this eliminates many emotional responses (see Examples 7
and 10), which, as described in the succeeding text, many theorists would
include as a source of intuitive judgment. The status of incubation is even
more controversial. Topolinski and Reber (2010), for example, argued that
it is a form of intuition. Nonetheless, given that the processes involved in
incubation (such as Example 4 earlier) extend over a time frame that is quite
different from the fast processes involved in most other intuitive judgments,
they will not be included in the discussion here (see Hélie & Sun, 2010, for a
model of implicit processes in problem solving).
Another common approach that I will not adopt here is to attempt to
define intuition in terms of the quality of either their outcomes or processes
(see Elqayam & Evans, 2011, for a thorough discussion of the problems aris-
ing from this approach). The approach of defining intuitions in terms of the
quality of their outcomes (e.g., as biases) is part of a long tradition that sought
to explain a number of otherwise seemingly irrational judgments using heu-
ristic processes (Kahneman & Tversky, 1973; Tversky & Kahneman, 1973).
At some point in that history, “intuitions” became mistakenly mapped onto
“bad decisions,” and a competing tradition began to demonstrate the sound-
ness of heuristic judgment (e.g., Gigerenzer, Todd, & The ABC Research
Group, 1999). The evidence, however, is that intuitions, like all other men-
tal processes, are accurate to the extent that the cues that elicit them are an
appropriate and well-calibrated basis for the judgment at hand.
Another approach that I will eschew is to equate intuitions with heuris-
tics (Betsch, 2008; Evans, 2010b). The problem here is that many heuristics
involve deliberate reasoning. Their primary characteristic is as a means to
turn a difficult problem into a more tractable one (see Shah &
Oppenheimer, 2008, for an excellent discussion), a process that might
involve processes that we are prepared to accept as intuitive and others that
we would not (Betsch, 2008). An example of this is Mary’s approach to
selecting a university (Example 6), which involved several heuristic strate-
gies, such as narrowing the choice to three and summarizing the good and
poor qualities of each, both of which were implemented deliberately, but
whose outcomes were undoubtedly shaped by implicit processes.
Glöckner and Witteman (2010, p. 5) aptly summarized the situation
thusly: “Controversy about what intuition is starts with its definition and
further concerns its properties, the scope and the homogeneity of the
phenomenon, its working mechanism, its distinctness from deliberation, its
relatedness to affect, and its dependence on experience.” The problem, they
argue, is that intuition is a label that is given to a variety of automatic processes,
so that the qualities that are associated with intuitive judgments will vary
according to the type of processes that give rise to it. Consequently, they
advocate studying intuitions in terms of the processes that give rise to them.
Earlier, Hogarth (2001, p. 194) made a similar claim that “In order to under-
stand and improve intuitions, you must understand the process by which they
were acquired.” That is the approach that guides the organization of this chap-
ter. In the succeeding text, I describe several prominent models of intuitions,
focusing on the processes posited to underlie intuitive judgment. The goal of
each section is to describe how intuitions arise in each model, provide a small
sample of the evidence that favors the model, describe the properties or char-
acteristics that are ascribed to the intuitive judgments that arise from the pro-
posed mechanisms, and, where possible, offer suggestions for how to verify
that the judgments in question have that property.

2. INTUITIONS AS TYPE 1 JUDGMENTS


2.1. The Architecture of Dual-Process Theories
I begin here with the broadest definition of intuitive processes before pro-
ceeding to narrower conceptualizations in later sections. Dual-process the-
ories of reasoning and decision-making (e.g., Evans & Stanovich, 2013;
Kahneman, 2011; Stanovich, 2011) posit two qualitatively different types
of processes, which are variously labeled “heuristic and analytic” (Evans,
2006), “System 1 and System 2” (Stanovich, 1999), “associative and rule-
based” (Sloman, 1996), “old mind versus new mind” (Epstein, 2010),
and so on. For the purposes of this chapter, I will label them type 1 and
2 processes (per Evans & Stanovich, 2013) as an acknowledgment that these
are umbrella terms that subsume many different cognitive and neural systems
and processes. Type 1 processes have been variously characterized as fast,
automatic, implicit, parallel, and low capacity, and type 2 processes as
slower, rule-based, serial, deliberate, and capacity-dependent (see
Evans & Stanovich, 2013, for a review). On this view, type 1 processes give
rise to intuitions (Evans, 2010a; Kahneman, 2003).
Although it has proven difficult to narrow down the critical features
of intuitive or type 1 processes (see Evans & Stanovich, 2013, for a
potential solution), dual-process theories have remained popular because
they propose architectures that specify how intuitive and deliberate modes
of thinking interact and that explain both our susceptibility to reasoning
biases and our ability to overcome them. The two most common are the
default interventionist architecture and a parallel processing model
(Evans, 2007a, compares and contrasts these views). The default interven-
tionist view (Evans, 2010a, 2010b; Kahneman, 2003; Stanovich, 2011)
posits sequential processing, such that type 1 processes provide an initial
answer to a problem, which may or may not be overturned by type 2 anal-
ysis. The parallel processing view (De Neys & Glumicic, 2008; Sloman,
1996) posits that both processes are initiated in parallel, but because of their
speed, type 1 processes are likely to complete first and to provide the answer.
In truth, these models also allow for a broader conceptualization of type 2
thinking that includes functions such as planning, deliberation, hypothetical
thinking, metarepresentation, and counterfactual thinking (Evans, 2007b,
2010b); however, it is their role in overcoming intuitive judgment
that has been the primary source of debate. Also, although most theorists
now acknowledge that both erroneous and successful reasoning can arise
from type 1 and 2 processing (Evans & Stanovich, 2013; Pennycook &
Thompson, 2012; Stanovich, 2011), the origins of this work were to
explain how rational, educated people often fail to succeed in situations that
require the application of elementary rules of probability or logic, such as the
following:
(11) If something is a flower, then it has a gebber.
If something has a gebber, then it is a rose.
Something is a flower. Therefore, it is a rose.
Thompson, Prowse-Turner, and Pennycook (2011)

(12) In a study 1000 people were tested. Among the participants there were 5
engineers and 995 lawyers. Jack is a randomly chosen participant of this
study. Jack is 36 years old. He is not married and is somewhat introverted.
He likes to spend his free time reading science fiction and writing computer
programs.
What is the probability that Jack is an engineer? ___
De Neys and Glumicic (2008)

(13) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take
100 machines to make 100 widgets?
___ minutes
Frederick (2005)

(14) Linda is 31 years old, single, outspoken and very bright. She majored in
philosophy. As a student, she was deeply concerned with issues of dis-
crimination and social justice, and also participated in anti-nuclear
demonstrations. What is the probability that:
Linda is a bank teller___
Linda is a bank teller who is active in the feminist movement___
Adapted from Tversky and Kahneman (1983)

In all cases, there is an easily computable answer based on the principles
of probability or logic, but most participants give an answer consistent with
an alternative criterion, such as belief (Example 11), representativeness
(Examples 12 and 14; Kahneman & Tversky, 1973; Tversky &
Kahneman, 1983), or pattern recognition (Example 13). On a dual-process
account, these answers are generated by type 1 processes, and type 2 pro-
cesses have not intervened to override that response in favor of one based
on logic, probability, or math. In support of the assumption that type 2 pro-
cesses require cognitive capacity to execute, those of higher capacity tend to
be the ones to give answers based on probability or logic (Stanovich, 1999,
2011). Similarly, limiting the opportunity for type 2 thinking by time
restrictions (Evans & Curtis-Holmes, 2005; Finucane, Alhakami,
Slovic, & Johnson, 2000) or with a dual task (De Neys, 2006) also reduces
the probability of a logic- or probability-based response.
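For concreteness, the normative answers to these four problems are as follows (a plain restatement of standard results, not a new analysis):
(11) The conclusion follows validly by chaining the two conditionals (flower → gebber → rose); belief-based responders tend to reject it because, in the real world, not all flowers are roses.
(12) Because the description is at best weakly diagnostic, the estimate should remain close to the base rate of 5/1000 = 0.5%, yet the stereotype-consistent details pull most estimates far higher.
(13) Each machine makes one widget in 5 minutes, so 100 machines make 100 widgets in the same 5 minutes; the pattern-based answer is 100 minutes.
(14) By the conjunction rule, the probability assigned to “bank teller who is active in the feminist movement” cannot exceed the probability assigned to “bank teller,” so the first option must be rated at least as probable as the second; representativeness pulls ratings the other way.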
Clearly, however, there are additional factors that predict the probability
of type 2 engagement, as there are many tasks where a relationship between
override and capacity is not observed (Stanovich & West, 2008). It is also the
case that people who otherwise would not do so can be induced to engage an
override by use of instructions (Daniel & Klaczynski, 2006; Evans,
Newstead, Allen, & Pollard, 1994; Newstead, Pollard, Evans, & Allen,
1992; Vadeboncoeur & Markovits, 1999) and by manipulations that encour-
age reasoning from a different perspective (Beatty & Thompson, 2012;
Markovits et al., 1996; Thompson, Evans, & Handley, 2005). These data
suggest that, in some cases, override failures are due to failures of motivation
(Stanovich, 2011) or monitoring (Kahneman, 2003; Thompson et al., in
press), rather than lack of capacity.
As I see it, there are several challenges for this framework going forward.
The first is to clarify which of the many characteristics attributed to type 1
processing is the most important in a given context and to then demonstrate
that the answers ascribed to type 1 processes have this characteristic. For
example, although speed of processing may be a correlated, rather than a
defining, feature of type 1 processing (see next section), many of the
explanations for the interaction between type 1 and 2 processes depend on
speed. Thus, at a minimum, one needs to demonstrate that the type 1 process in
question takes less time than its type 2 counterpart; similarly, it should be
possible to respond on the basis of type 1 outputs even under heavy task
demands. Neither assumption has been adequately tested, and attempts to
do so have not always provided clear-cut answers.
Take, for example, the case of belief bias. This refers to a tendency to
draw conclusions on the basis of belief, rather than validity (Evans,
Barston, & Pollard, 1983), and is one of the most widely replicated phenom-
ena in the reasoning literature (see Example 11 in the preceding text).
Responding on the basis of belief is also one of the textbook cases of an out-
come that is attributed to type 1 processes, as the phrase “intuitive beliefs”
suggested. Handley, Newstead, and Trippas (2011) tested this explanation
by asking reasoners to solve simple conditional reasoning problems (i.e., if
p, then q. p therefore q?) where the believability of the conclusion varied
orthogonally to the validity of the answer. The novel aspect of this study
was that participants were instructed to base their answers on belief for half
the trials and on logic for the other half. On the assumption that belief-based
judgments are faster than logic-based judgments, one would expect that
beliefs should interfere with judgments of logic, but not vice versa. In con-
trast, Handley et al. found that when asked to make judgments based on
belief, the validity of the conclusion interfered with those judgments just
as much as the reverse case. Pennycook, Trippas, Handley, and
Thompson (2013) replicated that phenomenon using a base-rate task,
similar to Example 12 in the text earlier. Finally, Pennycook and
Thompson (2012) found that problems that included information about
the base rates were processed just as quickly as problems where they were
omitted, despite evidence that the base-rate information was incorporated
into judgments. These data are clearly not consistent with the assumption
that judgments based on belief or representativeness are faster than judg-
ments based on logic or probability. A limitation to all of these studies
was that the logical/probabilistic answers were very simple and possibly gen-
erated by type 1 processes themselves. Nonetheless, these studies demon-
strate the need to be clear about the properties ascribed to type 1
processes and to examine those assumptions empirically.
A second challenge is to define more precisely the type 1 processes that
are thought to contribute responses to a given task. As explained in the pre-
ceding text, “type 1” is an umbrella that encompasses a range of different
processes, including affective responses, belief-based responses, responses
based on stereotypes, linguistic processes (such as relevance), and so on.
Moreover, researchers often make a number of unverified assumptions
about how these processes operate. Thus, I think the processes in question
need to be better specified in the context of a particular task, with attention
given to the cues that trigger the response, variability among participants in
the propensity for that response to be given, the speed and compellingness
with which it is made, and so on.
Finally, a major challenge for dual-process theories is to explain why,
with all other variables being equal, a type 1 response is satisfactory for a
given reasoner on some problems but not on others
(Thompson, 2009). Thompson and colleagues (Thompson & Morsanyi,
2012; Thompson et al., 2011, 2013) have proposed that one way to answer
this question is to think about this as a metacognitive question, that is, one
of monitoring the quality of type 1 outputs and the control of type 2
behavior (see succeeding text). There are, however, going to be a number
of different answers to such a complex question and there is relatively little
research on the topic.

2.2. Intuitions as the Autonomous Set of Systems


Stanovich (2004) and also Evans and Stanovich (2013) settled on auton-
omy as the central characteristic of type 1 processes, although Stanovich does
not explicitly refer to their outputs as intuitive. Nonetheless, the manner in
which these processes are thought to operate would fit many definitions of
intuition. Autonomous processes are executed whenever their triggering
conditions are present. The autonomous set of systems
(TASS) includes both domain-specific processes (e.g., language comprehen-
sion and perceptual processes) and more domain-general processes (e.g.,
associative learning, implicit learning, and skill acquisition). These processes
are not flexible, in that they are executed only in response to their triggering
stimuli; nonetheless, they are fast, are “astoundingly efficient” (Stanovich,
2004, p. 39), and can operate in parallel. Note that these properties are cor-
relates of autonomy and are not central to the definition of type 1 processing.
Analytic thinking, in this view, monitors outputs from TASS and (some-
times) intervenes when TASS produce an outcome that is contrary to the
goals of the reasoner.
The assumption that type 1 processes are autonomous has important
implications for decision-making, because these outputs will always form
part of the representation of the problem (Stanovich, 2004; Thompson,
2013). At one extreme, as described in the text earlier, type 1 output may
form the basis of the judgment or decision. Even if additional analysis is
engaged, the fact that one has an answer in mind may make it difficult to
think of alternatives or to act on them appropriately. Stanovich (2004,
p. 32) gave the example of rape victims, whose husbands and partners know,
on one level, that they need to be supportive, but who cannot suppress the
thoughts that their loved ones have been “defiled” or are “no longer theirs,”
even though they acknowledge such thoughts as inappropriate.
The criterion of autonomy also gives rise to a number of testable hypoth-
eses, most of which have not yet been tested. The most obvious is that the
outputs of autonomous processes should influence judgments based on
them, even if they are irrelevant to or contrary to the goals of the reasoner.
The multiple-instruction paradigm described in the preceding text (Handley
et al., 2011) offers one way to test this hypothesis: type 1 output should
always interfere with judgments based on the alternate criteria. Moreover,
type 1 output, because it is autonomous, should always be generated, even
under dual-task conditions (Hendricks, Conway, & Kellogg, 2013).
Although the processes that give rise to the autonomous judgments are
likely to be implicit, their output is explicit and so could be queried by verbal
reports. Finally, as in the text earlier, type 1 outputs may create a sense of
“simultaneous contradictory belief” (Sloman, 2002), so that even though
one knows that the probability of bank teller and feminist is lower than
the probability of being a feminist (Example 14 in the preceding text),
one may still have a lingering conviction that Linda should be a feminist.
These doubts may lower confidence in the judgment (De Neys,
Cromheeke, & Osman, 2011; Thompson et al., 2011), which means that
even though the judgment is made “correctly” on the basis of logic or prob-
ability, one may not act on it with the same confidence as one would
otherwise.
A central difficulty with the definition of type 1 processes as autonomous is
similar to the problems associated with defining intuitions as being uncon-
scious processes, that is, how do we differentiate the processes that produce
type 1 reasons and decisions from all the rest of our cognitive processes?
For example, how is giving the answer “100” or “Linda is a feminist bank
teller” different than recognizing a chair or remembering that tomorrow is
your mother’s birthday? Kahneman (2011, p. 247) suggested that we cannot:
“. . .the mystery of knowing without knowing is not a distinctive feature of
intuition; it is the norm of mental life.” Thus, it might well be that the processes
of recollection, perception, and reasoning all derive from similar processes,
which, as noted in the preceding text, has the advantage of allowing us to trans-
fer our knowledge from those areas to our understanding of reasoning.

2.3. Intuitions as Natural Assessments


Kahneman (2003) proposed that people automatically form impressions
about the objects of thoughts and perceptions; these impressions are called
natural assessments. The formation of these assessments is not subject to vol-
untary control, nor are their origins discernable by introspection. They
become intuitions when they inform judgments whose contents are avail-
able to working memory. Physical properties, such as size, distance, and
loudness, can give rise to natural assessments, as can abstract properties such
as similarity, surprisingness, and affective valence.
Natural assessments influence decisions by a process of attribute substitu-
tion, by which an accessible property, such as similarity to a stereotype, is
substituted for a less accessible one (such as probability). As an example, in
the “Linda” problem earlier (Example 14), reasoners are asked to make a prob-
ability judgment, but instead, make a judgment about how similar the
description is to the “feminist” stereotype. Other examples include using
affective responses, such as outrage, as an index for jury awards (Kahneman,
Schkade, & Sunstein, 1998), evaluating risk based on the desirability of an out-
come (Finucane et al., 2000), basing judgments of morality on feelings of dis-
gust (Schnall, Haidt, Clore, & Jordan, 2008), or making probability judgments
based on the availability of instances (Tversky & Kahneman, 1973), among
others. The intuitions that arise from this process of substitution can be
corrected by type 2 processes, but the monitoring is assumed to be lax and
the judgments produced with confidence.
Although there are ways to establish that judgments are mediated by a
process of attribute substitution, these prescriptions are seldom followed (Shah &
Oppenheimer, 2008). For example, in the classic version of the Linda
problem presented in the preceding text, respondents are asked to rank
the probability of eight outcomes describing her current employment and
activities, instead of just the two that were used in Example 14 (Tversky &
Kahneman, 1983). A second group was asked to rank how similar the descrip-
tion of Linda was to the eight outcomes. The important findings were that
both the similarity and probability groups ranked the probability of the
feminist + bank teller option higher than the bank teller option and that
the correlation between the rankings in the similarity and probability groups
was 0.98. In other words, judgments based on similarity or representativeness
perfectly predicted probability judgments, which is compelling evidence for
the hypothesis that the probability judgments were, indeed, based on
similarity.

2.4. Summary
Dual-process theories provide an architecture for the interaction between
intuitive (type 1) and deliberate (type 2) thinking. Because type 1 processes
are often executed more quickly than their type 2 counterparts, they form
the basis of an initial response, which may or may not be altered by subse-
quent deliberation. There is a lot of evidence supporting basic assumptions
regarding the role of WM and cognitive capacity in mediating type 1 and 2
thinking. However, many of the other assumptions, such as those regarding
the relative autonomy and speed of the two processes, have not been rigor-
ously tested, and the early evidence (e.g., Handley et al., 2011; Pennycook &
Thompson, 2012) suggests that the situation may be more complex than is
often assumed. Also, because of the breadth of processes that are subsumed
under the labels “type 1” and “type 2,” it is difficult to establish boundary
conditions for the two types of processes; a more productive approach might
be to define and verify the role of specific processes (e.g., representativeness)
on each task.

3. INTUITIONS AS MEMORIES
3.1. Intuitions as Implicit and Associative Learning
There are a number of theorists who have been working to demonstrate that
intuitions arise from basic learning and memory processes that are largely
implicit. This research dates back to Reber’s seminal work (Reber, 1993),
demonstrating that participants appeared to be capable of learning complex
rule structures implicitly, although there has been considerable debate about
what, precisely, participants learn in those studies (Hendricks et al., 2013).
Moreover, one of the earliest instantiations of dual-process theories
(Sloman, 1996) defined type 1 processes as associations, a tradition that con-
tinues in modern dual-process theories (e.g., see Epstein, 2008; 2010); asso-
ciative memory processes would be considered by Stanovich to be part of
TASS. Recently, Tilmann Betsch and Andreas Glöckner have developed
a detailed model of the relationship between associative memory processes
and intuition, which is described in the succeeding text.
They are careful (Betsch & Glöckner, 2010) to differentiate their view of
intuition from those that rely on heuristics, which, as described in the pre-
ceding text, reduce complex judgments to simpler ones, usually as a way to
deal with the constraints posed by the working memory. Their claim, in
contrast, is that “Intuition is capable of quickly processing multiple pieces
of information without noticeable cognitive effort. . . intuitive processes
are responsible for information integration and output formation (e.g., pref-
erence, choice). . .” (Betsch & Glöckner, 2010, p. 280). Here, the input to
intuition is information that is stored in long-term memory, primarily
acquired by associative learning (Betsch, 2008). The output of such pro-
cesses is a feeling that then serves as the basis for judgments and decisions.
Betsch and Glöckner have modeled these assumptions in a connectionist
network that uses a parallel constraint satisfaction (PCS) rule to generate
decisions/preferences. Its goal is to maximize coherence; thus, an incoherent
input is a bottleneck to judgment. For example, Glöckner and Betsch (2012)
showed that the time required to make a decision decreased with the coher-
ence of the input, even if it meant increasing the amount of information to
be processed. Participants in this study were asked to choose between two
products, about which they had information from four people who had
tested the products. The key manipulation was that one piece of information
could be removed that increased coherence (by removing information that
conflicted with the best alternative) or that decreased it (by removing infor-
mation that was consistent with it). As expected, decision times were longer
(and confidence lower) for the incoherent display relative to the control,
despite the fact that more information had to be processed. The reverse
was true for the displays that increased coherence.
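To make the settling dynamics concrete, the following Python sketch implements a toy parallel constraint satisfaction network (a minimal illustration, not Glöckner and Betsch's actual implementation; the weights, inputs, and update rule are assumptions chosen only to mirror the two-option, four-tester design described above). Two mutually inhibiting option nodes receive support from four tester nodes, activations are updated until the network settles, and the number of iterations to settle stands in for decision time.

import numpy as np

# Toy parallel constraint satisfaction (PCS) sketch: nodes excite or inhibit
# one another, activations are updated until the network settles, and the
# number of iterations to settle serves as a proxy for decision time.

def pcs_settle(weights, external, decay=0.05, step=0.1,
               floor=-1.0, ceiling=1.0, tol=1e-4, max_iter=10000):
    """Iterate activation updates until the largest change falls below tol."""
    a = np.zeros(len(external))
    for iteration in range(1, max_iter + 1):
        net = weights @ a + external              # summed input to each node
        delta = np.where(net > 0,
                         net * (ceiling - a),     # positive input pushes toward ceiling
                         net * (a - floor))       # negative input pushes toward floor
        a_new = np.clip(a + step * (delta - decay * a), floor, ceiling)
        if np.max(np.abs(a_new - a)) < tol:
            return a_new, iteration
        a = a_new
    return a, max_iter

# Nodes 0 and 1 are the two options and inhibit each other;
# nodes 2-5 are the four testers' reports, each linked to the option it favors.
W = np.zeros((6, 6))
W[0, 1] = W[1, 0] = -0.6
for tester, option in [(2, 0), (3, 0), (4, 0), (5, 1)]:
    W[tester, option] = W[option, tester] = 0.3

coherent   = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.2])   # reports largely agree
conflicted = np.array([0.0, 0.0, 1.0, 1.0, 0.2, 1.0])   # reports conflict
for label, evidence in (("coherent", coherent), ("conflicted", conflicted)):
    activations, iterations = pcs_settle(W, evidence)
    print(label, "input settled in", iterations, "iterations;",
          "option activations:", np.round(activations[:2], 2))

On the PCS account, the conflicted input should take more iterations to settle and should yield a smaller activation advantage for the winning option, mirroring the longer decision times and lower confidence described above; varying the input vectors in the sketch makes it easy to explore how settling time changes with the coherence of the evidence.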
A second key assumption of this model is that intuitive processing is par-
allel, fast, and does not require conscious attention. In an early set of studies
(Betsch, Plessner, Schwieren, & Gütig, 2001), participants were told to
watch a series of television ads, whose contents they were told to remember
for a subsequent test. They were told to do so while performing a dual task,
which was reading the prices of a large number of fictitious stocks that
scrolled along the bottom of the screen. After the memory test for the
ads, participants were unexpectedly asked to evaluate the stocks. The task
was designed to prevent participants from being able to make explicit eval-
uations of the stocks: the ads were salient and designed to focus attention on
themselves, participants were told to try and remember the ads and that read-
ing the stock prices was designed to disrupt that, and the number of ads
(between 20 and 40) and stock prices (up to 140) requiring processing
was large. Nonetheless, participants best “liked” the stocks whose total share
price was highest, and the rank order between the sum of stock prices and
liking judgments was almost perfect. Interestingly, when asked to evaluate
the stocks explicitly (i.e., without distractions and under instructions to be
accurate), participants changed their evaluation criteria, preferring stocks
with a higher average performance, rather than those with the highest totals
(Betsch, Kaufmann, Lindow, Plessner, & Hoffmann, 2006). Betsch et al.
speculate that implicit learning is sensitive to the valence and the frequency
of stimuli, whereas explicit judgments are restricted by working memory
capacity and thus based on the recollection of a small number of instances.
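A hypothetical pair of stocks shows how the two criteria can come apart (the numbers are invented for illustration and are not taken from the original materials): Stock A appears 10 times at an average price of 50 (total = 10 × 50 = 500, average = 50), whereas Stock B appears 4 times at an average price of 80 (total = 4 × 80 = 320, average = 80). A judgment that tracks totals ranks A above B; a judgment that tracks average performance ranks B above A. The implicit evaluations described above followed the first criterion, and the explicit evaluations followed the second.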
In this view, intuitive and analytic processes perform different tasks
(Betsch & Glöckner, 2010). It is assumed that intuitive processes integrate
information and form preferences but that the input to integration may
be analytic, for example, by directing memory search, assessing conse-
quences and goals, and hypothetical thinking. The intuitive processes are
assumed to operate in the background, even when one is engaged in delib-
erate judgment. Indeed, explicit attention may disrupt the optimal operation
of these processes if it focuses attention on misleading cues. Thus, these are
autonomous processes, in the manner that Stanovich (2004) conceived, that
is, they operate regardless of one’s intentions.
Note that in this view, intuition is often used to refer to both the pro-
cesses that underlie a decision (i.e., of processing the values of the stocks) and
the judgments or preferences that arise from those processes. Intuitions are
also often referred to as agents, who are capable of accomplishing things.
Despite this ambiguity, the main assumptions of the theory have been
instantiated in a connectionist model, which is an important step towards
disambiguating the theoretical constructs. It is also generative, in that pre-
dictions from the model can be used to generate new research. One clear
prediction is that cognitive capacity is not required to make the kinds of
intuitive judgments that the authors describe, so that they should be achiev-
able by children (Betsch & Glöckner, 2010) and should be invariant to the
cognitive ability of the reasoner. Moreover, the regularities that guide the
judgments (i.e., sensitivity to sums as opposed to averages) should show sim-
ilar insensitivities to cognitive capacity.
Another go-forward option might be to use this framework to investi-
gate other psychological processes that are claimed to process large amounts
of information in parallel and without conscious awareness. “Thin slices”
refer to situations where people make judgments about other people based
on a small (e.g., 10–15 s) sample of behavior (Ambady, 2010). For example,
participants’ judgments of teacher effectiveness based on 10 s silent clips of
university teachers correlated about 0.70 with end of semester ratings of
those teachers. The correlation was of similar magnitude if the clip was
observed under dual-task conditions. However, the correlation dropped
substantially if participants listed their reasons prior to making their judg-
ments, consistent with Betsch and Glöckner’s suggestion that attention
may sometimes be focused on suboptimal cues. Alternatively, Ambady
(2010) speculated that asking for deliberate reasoning may lead people to
overlook relevant and reliable affective cues.
These conclusions emphasize a notable point of contrast between this
view of intuition and that embodied in dual-process theories with regard
to the accuracy of intuitive processes. While dual-process theorists have
acknowledged that both intuition and type 2 thinking can produce both
good and bad decisions, these models are rooted in explaining how intui-
tions can produce reasoning biases. Here, in contrast, the emphasis is on
the accuracy of intuitions, which fits into an alternative tradition that
emphasizes the accuracy of intuitive judgment. For example, in a famous
study, Wilson and Schooler (1991) had two groups of participants taste jams:
One group listed reasons for their choice, while the other did not. Com-
pared to an expert panel, the group that listed reasons fared poorly. Thus,
an intuitive approach appeared to perform better; however, if the task is
altered slightly, the reverse is true. McMackin and Slovic (2000) asked
one group of participants a preference question (how much would people
like a set of advertisements) and another group a fact-related question, namely,
“What is the length of the Amazon River?” When asked to list reasons, par-
ticipants in the preference condition fared more poorly; however, in the fact
condition, the group who listed reasons performed better. Here, rational
analysis had a functional role to play in elaborating the problem space and
bringing in additional relevant information.
However, much caution should be exercised when making attributions
about the value of intuitive versus rational decisions. For example,
Dijksterhuis, Bos, Nordgren, and Van Baaren (2006) reported a startling
conclusion that people are better off making decisions without deliberating
on them, particularly when the decisions are complex. Participants in these
studies were shown attributes for a number of options (e.g., apartments that
varied in terms of price, distance from work, and size). Afterward, partici-
pants were allowed to ponder their decision for a period of time or were
distracted using a secondary task, such as solving anagrams. Dijksterhuis
et al. reported that participants did better on complex decisions when they
were distracted than when they were allowed to think about it. Although
this work is much cited, it has proved notoriously difficult to replicate
(Acker, 2008; Lassiter, Lindberg, González-Vallejo, Bellezza, & Phillips,
2009; Newell & Rakow, 2011; Newell, Wong, Cheung, & Rakow,
2009; Thorsteinson & Withrow, 2009). Until the mechanisms of intuitive
thought are better understood, along with precise predictions about how
type 2 and type 1 processes interact in a given task, we are not likely to
be able to give an account of when one mode of thinking yields better out-
comes than the other.

3.2. Intuition as Skilled Memory


Just as much has been written about the benefits and drawbacks of intuitive
judgments, so the literature also provides two clearly different views of the
accuracy of expert intuition. Kahneman (2011) described one of the moti-
vating factors of his early work as debunking the myth of expert intuition.
In contrast, researchers like Klein (1999) have spent their careers docu-
menting how trained professionals, such as firefighters, can make accurate
judgments under pressure and with little introspection or thought.
Kahneman’s view stems from the well-documented failures of expert
judgments in some domains. As an example, having years of experience
in some clinical domains such as psychology and some areas of medicine does
not reduce diagnostic errors (Camerer & Johnson, 1991; see also Ericsson,
2007). Moreover, in many cases, performance is often suboptimal, that is,
less accurate than actuarial models. Experience can produce some improve-
ment in the calibration of confidence, but the tendency is still towards over-
confidence (Davis, 2009). In some areas, such as forecasting economic and
political events, experts do no better than lay people (Tetlock, 2005).
In other domains, however, there is a host of data to show that chess (and
other) experts routinely outperform novices, even when their judgments are
made under time pressure (Calderwood, Klein, & Crandall, 1988; Gobet &
Simon, 1996). This latter line of research stems from the tradition of Chase
and Simon, who studied chess experts’ memories (Chase & Simon, 1973a) and
who showed that experts were better able to reproduce quickly exposed
(5 s) chessboards than novices, but only when those pieces were grouped
into meaningful games. Importantly, when the pieces were randomly orga-
nized, the two groups performed equally, so that it was not simply a matter of
the experts having better memory overall. Similarly, experts can look at a
board and select a good move quickly (see Charness, 1991, for review).
The reason for both is that in the process of acquiring their skill, experts have
learned to recognize many thousands of patterns, and those patterns are
organized in memory as meaningful chunks. Thus, a large part of the skill
that underlies expert performance in many domains is well-developed rec-
ognition memory (Chase & Simon, 1973b).
Klein (1999) documented how similar processes may play a role in
other expert decisions, such as those of nurses and firefighters. His natural-
istic studies of decision-making show instances where experts make rapid,
accurate decisions under time pressure. These decisions have the appearance
of intuitions and are proposed to be based on rapidly computed information
made available from long-term memory. Based on an extensive study with
fire commanders, Klein, Calderwood, and Clinton-Cirocco (1986) showed
that decisions (such as where to direct men and resources, when to call for
backup, when a house was about to collapse, which part of a fire was too
dangerous to approach, and where the origin of the fire was) were often
based on consideration of only a single hypothesis, usually the first that came
to mind, which was discarded in favor of an alternative only if a mental sim-
ulation of the consequences of that decision indicated it would not satisfy
and could not be modified to suit the current context.
Kahneman and Klein (2009) proposed that the key to skilled intuitions is
the availability of valid feedback and the opportunity to learn the relevant
decision cues. The learning need not be explicit, but the cues must be regular
and valid. They offer as an example the case in which a building is about to
collapse in a fire: there are very likely to be detectable, reliable cues that this
is about to happen. Cases where the environment is less regular or where
opportunities for feedback are absent, as in many clinical settings, lead to
intuitions with low accuracy and poor calibration of confidence. For
example, physicians who treat patients in emergency wards often cannot
get feedback on their treatment choices because patients are routinely
moved to a regular ward or to intensive care, or discharged to a family physician
(Hogarth, 2001).
The second condition that must be satisfied in order to give rise to skilled
intuitions is practice. Although such judgments are available quickly, the
skill that underlies them is acquired deliberately and requires targeted prac-
tice with feedback (rather than just experience) along with the possibility to
correct errors (Ericsson, 2009). Moreover, the amount of practice required is
extensive, on the order of about 10 years (Ericsson, Krampe, & Tesch-
Römer, 1993).
Finally, although the preceding discussion has focused on the accuracy
of rapid, intuitive decisions, there is evidence that these decisions can be
improved by additional analysis (Chabris & Hearst, 2003). As described in
the preceding text, fire commanders simulate the expected outcome of
their initial thought before acting and may modify or reject it. Similarly,
Moxley, Ericsson, Charness, and Krampe (2012) examined the initial
moves (the first move they considered while thinking aloud) and the final
moves of skilled and less-skilled chess players. As suggested earlier, the ini-
tial moves of the expert players were better than those of the novices, as were their
final moves. Importantly, however, the final moves were better than the
initial ones, indicating that additional analysis improved on the initial intu-
ition. Indeed, for difficult problems, the expert decisions particularly
benefited from additional analysis. Thus, although experts may develop
better intuitions than novices, this in no way implies that analysis should
be abandoned in favor of intuition.

3.3. Intuition as Recognition Memory


The preceding section focused on the role that recognition memory plays in
skilled intuition. Research from another domain, although not explicitly
focused on intuitions, provides evidence that other forms of reasoning
may also rely on recognition memory. Specifically, Heit and Hayes
(2011) argued that a model designed to explain people’s ability to recognize
stimuli can also predict their judgments of the strength of simple inductive
arguments, insofar as both rely on judgments of similarity (Heit, Rotello, &
Hayes, 2012). Heit and Hayes (2011) showed participants pictures of large
dogs. Half of the participants studied the exemplars in preparation for a rec-
ognition test, while others learned that the dogs shared a novel property (i.e.,
have beta cells). At test, participants were shown a set of new and old items
and were asked to recognize them or to make inductive judgments (does this
animal have beta cells?). An item analysis showed a strong relation-
ship between the probabilities that an item was judged “old” in the recog-
nition test and judged to have the inductive property. In a follow-up study
(Heit & Hayes, 2013), a similarly high correlation between recognition and
induction was found for stimuli that embodied more complex similarity
relations, although the inductive judgments relied on more complex simi-
larity relations than were necessary for recognition judgments. Although
these researchers did not ask participants to make intuitive judgments, their
findings provide grounds for speculation that recognition memory may
inform intuitions in inductive reasoning also.
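The logic of such an item analysis can be sketched in a few lines of Python (the endorsement rates below are invented for illustration and are not Heit and Hayes's values): for each test item, take the proportion of participants who called it "old" in the recognition condition and the proportion who endorsed the inductive property, then correlate the two across items.

import numpy as np

# Hypothetical per-item endorsement rates for eight test items.
p_old       = np.array([0.91, 0.84, 0.76, 0.62, 0.55, 0.41, 0.33, 0.20])
p_induction = np.array([0.88, 0.80, 0.79, 0.66, 0.58, 0.45, 0.30, 0.24])

# Item-level Pearson correlation between recognition and induction judgments.
r = np.corrcoef(p_old, p_induction)[0, 1]
print("item-level r between recognition and induction:", round(r, 2))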

3.4. Intuitions as Gist


According to the fuzzy-trace theory, intuitions arise from gist memory traces
(see Reyna, 2012, for review). Gist and verbatim representations are formed
in parallel and represent different qualities of a stimulus. Gist represents the
essential meaning of the situation, whereas verbatim traces are shallow but
precise representations that are quickly forgotten. Precise calculations such as
adding numbers, computing probability, and logic require access to these
verbatim representations. However, it is assumed that people prefer to work
with gist-based or fuzzy representations and find working with verbatim
representations difficult. Intuitions rely on the gist representations and
thereby “produce meaning-based distortions in memory and reasoning”
(Reyna, 2012, p. 333).
In terms of reasoning and decision-making, judgments are assumed to be
guided by a hierarchy of preferences for various kinds of information
(Reyna, 2012). Gist representations come in a range of complexity, and
decisions are assumed to default to the simplest. If that fails to yield a deci-
sion, then more complex representations are used. For example, a verbatim
representation of “995 out of 1000 people in a sample are lawyers and 5 are
engineers” would allow the inference that 99.5% of the sample are lawyers.
A gist representation would retain the information that the sample contains
some lawyers and some engineers (categorical and simple) or that there are
more lawyers than engineers (ordinal and more complex). When the verba-
tim and gist representations conflict, for example, when the numbers con-
flict with the meaning suggested by the gist representation, most people,
except those high in need for cognition, will opt for the gist representation;
these latter will inhibit the gist representation in favor of the verbatim one. It
is worth pointing out, at this juncture, that the evidence in favor of this last
assumption is slim, as measures of thinking dispositions account for a signif-
icant but relatively small amount of the variance in the tendency to give
answers based on probability or logic (Stanovich, 1999).
Early evidence in favor of the role of gist in intuitive judgments was pro-
vided by Reyna and Brainerd (1991). This experiment examined the classic
framing effect reported by Tversky and Kahneman (1981), in which partic-
ipants are told to imagine that the United States is preparing for an outbreak
of an Asian disease and that 600 people are expected to die. In the positive
framing of the problem, reasoners are asked to choose between two pro-
grams to combat the outbreak: If the first is adopted, then 200 people will
be saved, whereas the second program has a 1/3 probability of saving 600
people and a 2/3 probability of saving no-one. The preferred choice is
the first. An identical but negative framing yields different choices: Here,
if the first program is adopted, 400 people will die, whereas the second pro-
gram has a 1/3 probability that no-one will die and a 2/3 probability that 600
people will die. The preference is now for the second option. Clearly, 200/
600 people living and 400/600 people dying are equivalent options, but they
are psychologically very different.
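The equivalence is easy to verify from the expected values implied by the numbers given in the scenario. Positive frame: Program 1 saves 200 people for certain, while Program 2 saves (1/3) × 600 + (2/3) × 0 = 200 people on average. Negative frame: Program 1 means 400 deaths for certain, while Program 2 yields (1/3) × 0 + (2/3) × 600 = 400 deaths on average. Every option therefore amounts to an expected 200 survivors and 400 deaths out of 600; only the description changes.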
According to fuzzy-trace theory (Reyna & Brainerd, 1991), this occurs
because the gist representations of the two problems are very different. For
the positive frame, the gist represents a choice between “saving some” (the
first choice) and “saving some versus saving none” (the second choice). As
saving some is better than saving none, the first choice is preferred. For the
negative frame, the choice is between “many dying” and “many dying ver-
sus no-one dying.” Now, the latter is preferred. These framing effects are
intensified when the numbers are removed and substituted with vaguely
worded alternatives (e.g., 1/3 probability of saving many people and a 2/3
probability of saving no-one). As the alternatives became more vague,
the size of the framing effect increased, exactly what would be expected when
reliance on gist is increased.
Experts, who have much experience in a domain, also are thought to rely
on gist and to process information as “simply, qualitatively, and categorically
as possible given the constraints of the task” (Reyna & Lloyd, 2006, p. 180).
Relying on gist representations means that experts will make sharper distinc-
tions (i.e., using categorical gist representations) at risk boundaries. In this
study, physicians of varying skill levels were asked to assess the risk of hypo-
thetical patients developing unstable angina, for which two symptoms are
relevant: the probability of an imminent myocardial infarction or of clini-
cally significant heart disease. They were asked to make a number of judg-
ments about these patients, including the probability of either disjunct alone
(i.e., of having either symptom), the probability of the disjunction (i.e., of
having one or the other symptom or both), and the assessments of risk and
recommended courses of action (admit to ward, refer to cardiovascular care,
discharge, etc.). As expected, increasing expertise led to better risk assess-
ments. However, as Klein (1999) noted, most expert physicians relied on
less information, not more, than their less expert counterparts and made
sharper discriminations in terms of recommended actions, that is, they were
more likely to recommend either admission to intensive care or discharge,
without the intermediate hedging (e.g., send to ward) that characterized less
expert physicians. Interestingly, they were just as likely to commit the dis-
junction fallacy, judging the probability of the disjunction to be less than at
least one of the disjuncts. All of these phenomena were attributed to relying
on simple, fuzzy gist representations.
Reyna (2012) was very careful to contrast “gist” representations to the
associative approach adopted, for example, by Glöckner and Betsch
(2010). The reason for this is that gist is essentially about meaning, whereas
connectionist networks represent “mindless stimulus–stimulus associations”
(p. 335). However, the patterns extracted by Glöckner and Betsch’s (2010)
model seem very close to gist meaning, representing, as they do, summary
information about the values of the stocks that participants were exposed to.
Conversely, one might ask how close the gist traces that underlie expert recognition are to the types of recognition memory that Chase and Simon (1973a, 1973b) and Klein (1999) referred to. Certainly, there are
unique predictions that fall out of fuzzy-trace theory having to do with
the categorical nature of gist memory, but it is not clear how well they would
fit with other recognition models.

3.5. Summary
Memory-based theories of intuition emphasize the capacity and accuracy of
intuitive processing. They posit relatively well circumscribed and under-
stood mechanisms of memory as the basis for their theories, which adds pre-
cision to their predictions. However, with the exception of fuzzy-trace
theory, the relationship between the processes that give rise to intuitions and those that support deliberate cognition is not well specified. These models would benefit
from architectural assumptions, such as those that characterize dual-process
theories, or other types of models (Hélie & Sun, 2010) that characterize the
interaction between implicit and explicit processes.

4. INTUITIONS AS METACOGNITION
4.1. Intuitions of Metamemory
Metacognition is often defined as “knowing about knowing,” that is, know-
ing about the contents of our cognitions. Metacognition also refers to the
monitoring and control of cognitive processes (Nelson & Narens, 1990).
Phrased in those terms, the concept of metacognition seems very far away
from the concept of intuitions. However, there is compelling evidence that
many of our monitoring processes are implicit (see Koriat, 2007, for a
review) and are really another form of intuition. That is, we have intuitions
about whether or not our cognitive processes have worked well.
Much of this work has been carried out in the field of memory, and the intu-
itions in question refer to judgments about the accuracy of one’s memory, as
either an expression of confidence or a prospective judgment about one’s
future ability to remember or recognize an item from memory. Take, for
example, a situation in which you encounter a colleague in the hallways
about whose name you are uncertain. You can tell that you are uncertain,
but the cues that gave rise to that feeling are not likely accessible (e.g., Koriat,
Ackerman, Lockl, & Schneider, 2009; Koriat, Bjork, Sheffer, & Bar, 2004;
Koriat & Levy-Sadot, 2001; Schwarz, 2004, but see Brewer & Sampaio,
2006; Matvey, Dunlosky, & Guttentag, 2001).
The reason for this is that monitoring is thought to be an inferential process (Koriat, 2007) that is based on the properties of retrieval processes rather than on the properties of the item retrieved (e.g., Benjamin, Bjork, &
Schwartz, 1998; Busey, Tunnicliff, Loftus, & Loftus, 2000; Jacoby,
Kelley, & Dywan, 1989; Koriat, 1995, 1997; Koriat & Levy-Sadot, 1999;
Schwartz, Benjamin, & Bjork, 1997). For example, Reder and Ritter (1992) showed that the familiarity of retrieval cues acts as a cue to feeling of knowing (FOK) judgments. Participants were shown difficult arithmetic problems, such as 17 × 23; these were repeated at varying frequencies during the experiment, as were their component parts (e.g., 17 could appear in several other problems). After a period of practice, and unbeknownst to the participants, they were presented with new problems that were composed of parts of previously viewed ones (e.g., 17 + 23). Participants were asked to do two
things: (1) Make a judgment of whether they could retrieve the answer from
memory or would need to calculate it and (2) then execute the chosen strat-
egy. Importantly, they were asked to make their strategy selection in less
time than would be possible to retrieve the answers from memory. During
the practice trials, participants were better than chance at choosing the cor-
rect strategy; however, this relationship was not based on how well they
actually knew the problems, as performance on the new problems indicated.
On these problems, participants were increasingly likely to judge that they could retrieve an answer as a function of the frequency with which the problem's components had been presented in the earlier phase, to the point that they would make errors such as saying they could retrieve the answers to problems they had never seen (i.e., 17 + 23). Thus, participants were making
quick judgments about the state of their knowledge, based on the cues they
had available, namely, how familiar they were with the retrieval cues. As such, these judgments bear a remarkable similarity to the other types of intuitive judgments discussed earlier in this chapter.
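The logic of this cue-based inference can be captured in a small toy program. The stimuli, the familiarity threshold, and the function name below are hypothetical illustrations rather than Reder and Ritter's materials or model; the point is only that the "I can retrieve it" judgment is computed from the familiarity of a problem's parts, not from whether the whole problem was ever studied.

from collections import Counter

# Practice phase: whole problems are studied, and their operands accumulate
# familiarity as retrieval cues (toy data, not the original stimuli).
studied = ["17 x 23", "17 x 23", "17 x 23", "41 x 6"]
part_counts = Counter(part for prob in studied for part in prob.split(" x "))

def feels_retrievable(problem, threshold=4):
    """Judge 'I can retrieve this' when the operands feel familiar enough,
    regardless of whether the problem itself was ever studied."""
    a, _operator, b = problem.split()
    familiarity = part_counts[a] + part_counts[b]  # summed cue familiarity
    return familiarity >= threshold

print(feels_retrievable("17 x 23"))  # True: genuinely studied
print(feels_retrievable("17 + 23"))  # True as well: never studied, but its parts
                                     # are familiar, so an erroneous FOK arises
print(feels_retrievable("38 + 29"))  # False: unfamiliar parts

Because cue familiarity is usually, but not always, correlated with actually knowing the answer, a judgment computed this way will be accurate in most circumstances and systematically wrong in rigged ones such as Reder and Ritter's.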
Another cue that underlies metacognitive judgments is fluency, or the speed
with which an item comes to mind (e.g., Benjamin et al., 1998; Jacoby et al.,
1989; Kelley & Jacoby, 1993, 1996; Matvey et al., 2001; Whittlesea &
Leboe, 2003). The fluency of processing is such a strong cue to judgment
that fluently processed items give rise to the attribution that an item has been
previously experienced, even when it has not (e.g., Jacoby et al., 1989;
Whittlesea, Jacoby, & Girard, 1990) and even when fluency is misdiagnostic
of accuracy, such that fluently generated items are poorly recalled (Benjamin
et al., 1998).
Of course, both familiarity and fluency are normally reliable cues to
memory (e.g., Ackerman & Koriat, 2011). As is the case for the intuitive
judgments that were described earlier, the accuracy of the metacognitive
judgment will depend on the validity of the cues (Koriat, 2007), so that if
fluency and accuracy are positively related in the current task, metacognitive
judgments based on fluency will be accurate. When the cues are not valid,
such as the familiarity cues in Reder and Ritter’s (1992) study, then meta-
cognitive judgments will not be accurate. Thinking about metacognitive
judgments as intuitions raises several questions for new lines of investigation.
The first is whether these judgments fit Stanovich’s (2004) definition of
autonomy: that is, when the cues are present, do they always give rise to
a judgment, even though that judgment may be subsequently discounted
(Schwarz & Vaughn, 2002)? Also, can we think about reliance on cues such
as fluency and familiarity as a case of attribute substitution, namely, that a
judgment about memory strength is based, instead, on a judgment of famil-
iarity or fluency?
Framing the analysis in this way allows us to demystify yet another prop-
erty of intuitive judgments, namely, that they are often accompanied by a
strong sense of rightness (Hogarth, 2010; Sinclair, 2010; Topolinski &
Reber, 2010). Specifically, it leads to the conclusion that intuitions, like
all other processes of memory and perception, are monitored via cues such as familiarity and fluency. Because the processes that give rise to intu-
itions tend to be fast and fluent and the experience of fluency engenders a
sense of confidence, intuitions are often confidently held (Thompson et al.,
in press; Topolinski & Reber, 2010).

4.2. Intuitions and the Feeling of Rightness
Thompson and colleagues (Thompson, 2009; Thompson & Morsanyi,
2012; Thompson et al., 2011, 2013) have developed a framework in which
type 1 outputs are monitored in much the same way as memory retrievals.
On this view, intuitive, type 1 processes give rise to two outputs: The first is
the answer itself and the second is a feeling of rightness (FOR) about that
answer. The FOR is not constant across a set of problems; it is stronger for some problems than for others.
In several studies, Thompson and colleagues have also demonstrated that
the FOR exerts an important control function over type 2 thinking. For this,
Thompson et al. (2011, 2013) developed a two-response paradigm in which
participants are asked to provide an initial, intuitive answer to a problem and
then rate their FOR about that answer. They are then given free time to
rethink their response. In a wide variety of reasoning tasks, both the amount of time that participants spend rethinking their answers and the probability that they change an initial answer vary with the FOR, such that type 2 thinking is more likely in response to a low FOR than to a high one (Thompson et al., 2011, 2013).
Moreover, the FOR, like the other metamemory judgments that we
have discussed, is assumed to be an inference that is based on the experience
associated with generating the answer (Thompson & Morsanyi, 2012;
Thompson et al., 2011, 2013). For example, the fluency or speed with
which the initial answer is produced predicts FOR judgments, such that flu-
ent responding is associated with strong FORs, and factors that increase or
decrease answer fluency, such as the availability of a heuristic strategy,
increase or decrease FORs. Thus, the fluency of answering a reasoning
problem may form the basis of an attribution of rightness. This analysis pro-
vides a tentative reason for why the monitoring of type 1 outputs may be lax
and why type 2 processes are not always called to intervene: answers that are
fluently generated may create a strong FOR, which signals that additional or deeper analysis is not required.
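As a rough illustration of this monitoring-and-control loop, consider the following sketch. The functional forms and parameter values are arbitrary assumptions chosen for exposition; they are not Thompson and colleagues' fitted model, only a way of making the claimed direction of the relationships concrete.

def feeling_of_rightness(answer_time_sec, max_time=20.0):
    """Toy assumption: faster (more fluent) initial answers yield a stronger
    FOR on a 0-1 scale."""
    return max(0.0, 1.0 - answer_time_sec / max_time)

def rethinking_time(for_value, max_rethink=30.0):
    """Toy assumption: a low FOR recruits more type 2 thinking, expressed here
    as more seconds spent reconsidering the initial answer."""
    return (1.0 - for_value) * max_rethink

for t in (2.0, 8.0, 18.0):  # fluent, moderate, and disfluent initial answers
    for_val = feeling_of_rightness(t)
    print(f"answer in {t:4.1f}s -> FOR = {for_val:.2f}, "
          f"rethinking ~ {rethinking_time(for_val):.0f}s")

On this toy model, the fluent 2-s answer yields a high FOR and little rethinking, whereas the laboured 18-s answer yields a low FOR and substantially more type 2 engagement, the qualitative pattern reported by Thompson et al. (2011, 2013).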
The work on FOR judgments in reasoning is in its infancy, so there are a
great many open questions about the basis of FOR judgments and the rela-
tionship between FOR and type 2 thinking. For example, FORs, like other
metacognitive judgments, should be available quickly and mediated by vari-
ables that are known to influence those judgments. One hypothesis might be
that familiarity of the problem content informs FOR judgments in the same
way that familiarity informs FOKs (Reder & Ritter, 1992). In support of this
hypothesis, Shynkaruk and Thompson (2006) found that reasoners
expressed lower levels of confidence in conclusions about unfamiliar con-
cepts than familiar ones, suggesting that familiarity may play a similar role
in FOR judgments. However, there were a number of other variables, such
as accuracy and speed of response, that were correlated with familiarity in
that study, so it is not possible to draw firm conclusions. FOKs are also
known to vary with the accessibility of associations that come to mind dur-
ing a retrieval attempt (Koriat, 1993). A similar process can be posited for
reasoning, in that FORs and confidence judgments may increase with the
amount of information that is retrieved in support of a conclusion.
Finally, the extent to which participants are able to introspect about the origins of their FOR judgments is not known, although the low correlation between FOR and accuracy (Shynkaruk & Thompson, 2006; Thompson et al., 2011, 2013) suggests that reasoners have little insight into the cues that inform those judgments. However,
the data from other metacognitive judgments do not provide a straightfor-
ward answer to the question. For example, there is evidence that people
have beliefs about the operation of their memories (e.g., that related pairs
of words will be easier to remember than unrelated ones, Mueller,
Tauber, & Dunlosky, 2012) and that they can recruit these beliefs into their
metacognitive judgments (Brewer & Sampaio, 2006; Matvey et al., 2001;
Mueller et al., 2012). On the other hand, people often need to be strongly cued to apply explicit beliefs to metacognitive judgments, such that judgments of learning are not sufficiently adjusted to accommodate the well-known effect of time on forgetting (Koriat et al., 2004). Finally, participants may be unable
to describe how salient cues, such as the number of times an item was
rehearsed, relate to their metacognitive judgments, even when there is a
strong relationship between cue and judgment (Koriat et al., 2009). In
sum, the extent to which people’s metacognitive judgments reflect explicit
and implicit factors remains open.

4.3. Intuitions of Coherence


The studies that are described in this section suggest that intuitions of coher-
ence may play a role in metacognitive judgments similar to the role they
were proposed to play in preference judgments (Betsch & Glöckner,
2010). This work is based on a variation of Mednick and Mednick’s
(1967) Remote Associates Test. The paradigm involves presenting partici-
pants with word triads, some of which are coherent, in that all of the words
are weakly associated with a fourth word (e.g., playing, credit, and report are
associated with card), and others that are incoherent, in that there is no remote associate (e.g., house, lion, and butter). Bolte and Goschke (2005)
observed that participants’ judgments of coherence were above chance, even
when they were unable to retrieve the answer and were required to make
their judgments in a very short period of time (i.e., 1.5 s after the presenta-
tion of the triad). They defined intuition to be a judgment based on memory
contents that have been activated (in this task by priming), but which have
not been consciously retrieved.
Topolinski (2011) viewed performance on this task as part of a wider
ability to detect incoherence and inconsistency in the world and argued that
intuitions of coherence arise from fluency of processing (again, produced by
priming) that results in a positive affective experience. As evidence for this
fluency-affect intuition model, it has been demonstrated that participants make
faster lexical decisions about coherent than incoherent triads (Topolinski &
Strack, 2009a) and are faster to read coherent than incoherent triads
(Topolinski & Strack, 2009b). Judgments of coherence increase when triads
are processed more fluently, as when they are primed (Topolinski & Strack,
2009c); in addition, coherent triads activate the facial muscles associated
with smiling (Topolinski, Likowski, Weyers, & Strack, 2009) and are
“liked” better than incoherent trials (Topolinski & Strack, 2009b).
Although not commonly referred to in such terms, judgments of coher-
ence are very similar to metacognitive judgments. Consider, for example,
the similarity between Reder and Ritter's (1992) observations about the speed with which participants can make FOK judgments and the speed with which participants make coherence judgments. In both cases, participants are making a
judgment about a current mental state inferentially, based on the cues that
are currently available to them: familiarity for the FOK and fluency/affect
for the coherence judgments. In Thompson’s (2009) framework, both are
similar to judgments of solvability (JOS), which are prospective judgments
about whether the participant would be able to solve the problem at hand.
Moreover, other types of metacognitive judgments have been shown to
be sensitive to a slightly different type of coherence. In many classic reason-
ing problems, participants are asked to make inferences about two types of
trials: trials that are congruent, in which the answers based on logic or prob-
ability are the same as those based on type 1 outputs, such as representative-
ness or beliefs, and incongruent ones, such as those illustrated in Examples
11–14. It has been known for a long time that people tend to perform more
poorly on conflict than nonconflict problems (Evans et al., 1983); they also
take longer to respond (Bonner & Newell, 2010; Thompson et al., 2011)
and are less confident (De Neys et al., 2011; Thompson et al., 2011). More-
over, it appears that people are sensitive to the conflicting information,
regardless of which answer they give (De Neys & Franssens, 2009; De
Neys & Glumicic, 2008). Finally, conflict appears to produce a mild state
of arousal, as measured by skin conductance responses (De Neys,
Moyens, & Vansteenwegen, 2010).
Indeed, Koriat (2012) has recently argued that coherence is the major determinant of confidence and is also the proximal cause of fluency effects.
The self-consistency model (SCM) applies to situations where one must
make a choice between two alternatives. People are assumed to gather infor-
mation about those alternatives as the basis of a decision; confidence in the
decision will reflect the relative number of pros and cons favoring the chosen
option, regardless of the importance of those considerations. On this view,
fluency reflects differences in self-consistency, such that choices with a high
degree of consistency are made fluently relative to less consistent ones.
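A deliberately simplified rendering of this idea, based only on the description given above and not on Koriat's (2012) formal model, makes the point that confidence tracks the balance of considerations rather than their importance:

def choose_with_confidence(pros_a, pros_b):
    """Pick the option favored by more considerations; confidence is the
    proportion of all retrieved considerations that favor the chosen option,
    ignoring how important any single consideration is."""
    choice = "A" if pros_a >= pros_b else "B"
    confidence = max(pros_a, pros_b) / (pros_a + pros_b)
    return choice, confidence

print(choose_with_confidence(9, 1))  # ('A', 0.9): a consistent case, high confidence
print(choose_with_confidence(6, 5))  # ('A', ~0.55): a conflicted case, low confidence

A choice supported by nine of ten considerations is made with high confidence (and, on this view, fluently), whereas a six-to-five split yields the low confidence and slow responding that characterize conflict problems.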

4.4. Summary
In the preceding text, I have argued that important classes of intuitive judg-
ment are those that monitor cognitive processes, including intuitive ones. As
we normally cannot “see” how these cognitive processes work, judgments
about how well they have functioned must be made inferentially using cues
such as fluency, familiarity, and coherence. At this point, however, it is not
clear to what extent metacognitive judgments share properties, such as speed
or autonomy, with other types of intuitions, nor the extent to which they are
subserved by similar cognitive processes.

5. INTUITIONS AS FEELINGS
Although the discussion so far has focused on cognitive models of
intuition, there is increasing acknowledgment that emotions are integral
to intuitive choices. First, as Glöckner and Witteman (2010) argued, intu-
itions are often experienced as feelings of liking and disliking that may be
learned by conditioning or associations, as in the “mere exposure” effect
(Zajonc, 1980), or by other learning mechanisms. Indeed, although it is com-
mon to ask reasoners to express answers to logic or probability problems
as judgments of validity or probability, it is possible to measure affective
responses to such stimuli. In an interesting series of experiments,
Morsanyi and Handley (2012) showed that people “like” valid conclusions
more than they do invalid ones, although the information on which these
judgments are based is controversial (Klauer & Singmann, 2012).
In addition, as was discussed in the section on attribute substitution,
affective experience, such as disgust and outrage, can often form the basis
of judgments: the so-called affect heuristic (Finucane et al., 2000). Thus, for example, one may tend to overestimate the risks associated with things that evoke negative feelings and to underestimate the risks associated with things that evoke positive ones. Moreover, temporarily experi-
enced affective states can influence a wide range of judgments, including
assessments of risk (see Zeelenberg, Nelissen, Breugelmans, & Pieters,
2008). Haidt (2012) has written extensively on the role of emotional experiences such as disgust and disrespect in judgments of morality, such that acts that
are perceived as disgusting (such as using the flag to clean a toilet) are often
perceived to be morally wrong, even though no one may be harmed by
them. In this view, emotional experiences are inextricably integrated with
the cognitive processes that give rise to intuitive judgments. Zeelenberg
et al. (2008) presented a similar argument that emotions implicitly prioritize
and activate goals, that is, that states of emotion such as fear and anger moti-
vate behavior.
Finally, Topolinski and Strack (2009c) have shown that priming positive
emotions can create the experience of intuition. These experiments used the
Remote Associates Test in which participants were asked to judge whether
triads of words were coherent. People were more likely to judge a triad
coherent if it was accompanied by a positive emotional experience. For
example, a triad containing positive words, such as “fresh, holy, liquid,”
was more likely to be judged coherent than “salt, drown, rain,” although
each has the same target word, “water.” Similarly, words that were preceded
by a subliminally presented happy face were judged coherent more often
than those primed with a sad face.

6. SUMMARY

1. In this chapter, I have discussed four different classes of theories about the origins of intuitive judgments. Dual-process theories articulate a useful architec-
ture for predicting and explaining the interaction between intuitive, type
1 processes and analytic, type 2 processes. This architecture is missing
from the theories that specify intuition as a product of memory, which
would benefit from a better-defined theory of how analytic processes
shape, interact with, and modify the memory processes under
consideration. Conversely, dual-process theories have often tried to
define intuitive, type 1 processes in terms of the qualities of the processes
(such as fast, parallel, and autonomous), but have not made many
attempts to specify the exact workings of those processes. Thus, they
would gain specificity by incorporating the work done by Betsch,
Glöckner, Reyna, Klein, and others who have provided well-worked
out models of type 1 processes.
Similarly, given the importance of understanding how cognitive processes are monitored and how that monitoring controls subsequent behavior (such as the initiation of analytic thinking), all theories would benefit from adopting a metacognitive approach to explain relative levels of satisfaction with intuitive judgments. In the case of the PCS model
(Glöckner & Betsch, 2010), does satisfaction with an answer vary with
something other than coherence, such as the number of exposures to a stimulus or the pattern in which the information is presented?
In terms of recognition memory, what cues a feeling of wrongness about
the current situation that would make an expert firefighter or physician
rethink their initial analysis? An important and heretofore neglected con-
tribution to those feelings may be the absence of coherence, that is, that
there is something in the environment that is inconsistent with expec-
tations developed from prior experiences or representations of similar
past events.
Finally, all of these approaches need to give attention to the role that
emotional responses may have in (a) initiating cognition, (b) shaping
its outcomes, and (c) evaluating satisfaction with those outcomes.
2. The foregoing description raises the question of whether there are many
types of intuitions or just many theories of intuition. Although it may be
premature to draw conclusions, the mechanisms that are suggested to
give rise to intuitive experiences seem different enough on their surface
to preclude usefully grouping them all under one umbrella. For example,
the processes that give rise to the intuition that Linda is a feminist may
share much in common with the recognition memory processes that
were described in the preceding text, but the process by which they
become a judgment of probability requires an additional step not
required for most recognition-based judgments. Nor is it clear that
the deliberate learning engaged to enable expert intuitions will produce
memory representations that operate by the same principles that underlie
the implicit learning described by Betsch et al. (2001). Finally, a meta-
cognitive intuition is based on the operation of other cognitive
processes, such as whether they were fluent or not, and not on the con-
tents of the processes (Koriat, 2007), so it seems reasonable to posit that
they will operate according to different principles than those that accu-
mulate information and give rise to a preference for peripherally encoded
stock prices.
Nonetheless, this is all speculative, and it may well turn out that there is a common, underlying set of mechanisms for all of these processes. Until then, it would be more productive to study intuitions by studying the cognitive and emotional processes that give rise to them, rather than trying to capture them with a single, all-encompassing definition. The most obvi-
ous reason is that some of these processes may have quite different
qualities than others. For example, implicit encoding is central to
Glöckner and Betsch’s (2010) work but is less clearly relevant to
Reyna’s (2012) fuzzy-trace theory. Moreover, qualities such as
"implicit," "unconscious," and "fast" are not categorical properties;
they exist on a continuum, which creates problems for theories that try
to define intuitive or type 1 processes in these terms (Keren & Schul,
2009; Osman, 2004).
Take, for example, the quality of “knowing without knowing how,”
which is often provided as the sine qua non of intuitive experience. Many
of the responses thought to arise from intuitive processes may be accompanied by some degree of insight into their operation. As a case in point, on
base-rate tasks, similar to Example #12 in the preceding text, a response
based on a stereotype might be given after some reflection, because the
stereotype was deemed more compelling than the base-rate information
(Pennycook & Thompson, 2012). Indeed, whenever a reasoner initiates
an override of an intuitive response, it seems likely that they have under-
stood the basis of it (Jack really does seem to resemble an engineer) in
order to resolve the conflict. In other cases, such as the Linda problem, it
seems likely that people will be able to tell you that the reason for their
answer was that Linda resembles a feminist. On the other hand, they may
not be able to tell you about the “bait and switch” that their system
played in substituting this judgment for the probability judgment.
Indeed, people are very good at making up reasons for their choices,
but not necessarily at pinpointing the variables that contributed to them.
Wason and Evans (1975), for example, asked participants to justify their
answers to Wason’s four-card selection task. On this task, participants are
asked which of four cards they need to select to prove the truth or falsity
of a conditional rule, such as “if a card has a vowel on one side, then it has
an even number on the other." Given a choice of cards marked "A,"
“K,” “4,” and “7,” the modal response is “A” and “4,” cards that match
the values named in the rule (Evans, 1998), rather than "A" and "7," the cards that could prove the rule false (a short sketch following this summary list illustrates why). Participants' justifications for their answers mentioned many factors, often logically valid ones, but never the variable that has been shown to reliably determine selection patterns, namely, whether the card in question matches the values mentioned in the rule.
Haidt (2012) has made similar observations of people who try to defend
their view that a disgusting/disrespectful act, such as cleaning a toilet
with the flag, is immoral: people will provide any number of reasons
but, when challenged to show that someone is harmed by the act, revert
to saying “it’s just wrong.” Thus, in some cases, the description “know-
ing without knowing” and others, such as “fast, implicit, and parallel,”
may be appropriate, while in others, they may not be. A better approach
is to define the processes that give rise to the intuition and to then deter-
mine which of these descriptions characterize the outputs of those
processes.
3. Finally, it is worthwhile to consider the limits of intuitive processes and
the relative benefits of their more deliberate counterparts. As several the-
orists have argued (Epstein, 2010; Evans, 2010a, 2010b; Stanovich,
2004), the origins of many type 1/intuitive judgments lie in systems
and structures that are evolutionarily old and that are thus shared with
other mammals. However, as Stanovich (2004) has eloquently argued, humans are the only mammals that are capable of reprogramming the outputs of these processes to meet current rather than evolutionary priorities. That is, we are capable (if not always disposed) of overriding the output of TASS and substituting a different goal or preference. In some
cases, the functioning of modern society demands it: members of a jury
must put aside prior beliefs (e.g., beliefs about the police, tendencies to blame the victim, and stereotypes) and make decisions based on evidence. People can, and do,
overcome default preferences in terms of diet, exercise, and other risky
behaviors or consider taking long-term gains instead of short-term pay-
offs. Moreover, whereas TASS are efficient and powerful, they are not
flexible, so that type 2 processes are needed in novel situations that
require the flexible application of knowledge.
A more pessimistic view is that rational thought processes evolved to
justify or rationalize our intuitions (Haidt, 2012) or to persuade others to
our point of view (Mercier & Sperber, 2011). Regardless of the reasons
for their evolution, the ability to engage in type 2 thinking nonetheless allows
many useful functions that may be of personal utility (Darmstadter,
2013). For example, rational processes permit imagination and counter-
factual thinking (Hogarth, 2001) and hypothetical thinking and
metarepresentation, useful skills in problem solving, anticipating future
events, and learning from past errors. These skills can be deployed even
when the basis of thought is largely intuitive, in order to model the
likely consequences of a course of action and to seek alternatives if that course fails (Klein, 1999).
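As noted in point 2 above, a short sketch can make the selection-task logic concrete. The program below is an illustration written for this chapter, not part of Wason and Evans's (1975) study: for the rule "if a card has a vowel on one side, then it has an even number on the other," only a card that could turn out to pair a vowel with an odd number can falsify the rule, which is why "A" and "7" are the logically correct selections.

VOWELS = set("AEIOU")

def could_falsify(visible_face):
    """A card can falsify 'if vowel then even number' only if its hidden face
    might complete a vowel/odd-number pairing."""
    if visible_face.isalpha():
        # A visible vowel falsifies if the hidden number turns out to be odd;
        # a consonant can never falsify the rule.
        return visible_face.upper() in VOWELS
    # A visible odd number falsifies if the hidden letter turns out to be a vowel;
    # an even number can never falsify the rule.
    return int(visible_face) % 2 == 1

cards = ["A", "K", "4", "7"]
print([card for card in cards if could_falsify(card)])  # ['A', '7']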

6.1. Conclusions
Intuitions are a complex set of phenomena subserved by a variety of cogni-
tive and affective processes. These processes may or may not have sufficient
overlap to allow the use of “intuition” as a unitary construct, nor is it nec-
essarily that case that the qualities of the judgments that arise from these pro-
cesses (e.g., fast, compelling, and “knowing without knowing how”) will
characterize all forms of intuitive judgment. Thus, rather than trying to
define intuition in terms of the qualities of the outputs, it is argued that a
more useful approach is to specify the processes that give rise to intuitive
judgments and to then ascertain the qualities of those outputs.

ACKNOWLEDGMENT
I would like to thank Jamie Campbell for many helpful comments on an earlier draft of this
chapter and Nicole Therriault for technical assistance with the final version of the manuscript.

REFERENCES
Acker, F. (2008). New findings on unconscious versus conscious thought in decision making:
Additional empirical data and meta-analysis. Judgment and Decision Making, 3(4),
292–303.
Ackerman, R., & Koriat, A. (2011). Response latency as a predictor of the accuracy of chil-
dren’s reports. Journal of Experimental Psychology Applied, 17(4), 406–417. http://dx.doi.
org/10.1037/a0025129.
Ambady, N. (2010). The perils of pondering: Intuition and thin slice judgments. Psychological
Inquiry, 21(4), 271–278. http://dx.doi.org/10.1080/1047840X.
Beatty, E., & Thompson, V. A. (2012). Effects of perspective and belief on analytic reasoning
in a scientific reasoning task. Thinking & Reasoning, 18(4), 441–460. http://dx.doi.org/
10.1080/13546783.2012.687892.
Benjamin, A. S., Bjork, R. A., & Schwartz, B. L. (1998). The mismeasure of memory: When
retrieval fluency is misleading as a metamnemonic index. Journal of Experimental
Psychology General, 127, 55–68.
Betsch, T. (2008). The nature of intuition and its neglect in research on judgment and
decision making. In H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment
and decision making (pp. 3–22). New York, NY: Psychology Press.
Betsch, T., & Glöckner, A. (2010). Intuition in judgment and decision making: Extensive
thinking without effort. Psychological Inquiry, 21, 1–16. http://dx.doi.org/10.1080/
1047840X.2010.517737.
Betsch, T., Kaufmann, M., Lindow, F., Plessner, H., & Hoffmann, K. (2006). Different prin-
ciples of information aggregation in implicit and explicit attitude formation. European
Journal of Social Psychology, 36, 887–905. http://dx.doi.org/10.1002/ejsp.328.
Betsch, T., Plessner, H., Schwieren, C., & Gütig, R. (2001). I like it but I don’t know why:
A value-account approach to implicit attitude formation. Personality and Social Psychology
Bulletin, 27(2), 242–253. http://dx.doi.org/10.1177/0146167201272009.
Bolte, A., & Goschke, T. (2005). On the speed of intuition: Intuitive judgments of semantic
coherence under different response deadlines. Memory & Cognition, 33(7), 1248–1255.
http://dx.doi.org/10.3758/BF03193226.
Bonner, C., & Newell, B. R. (2010). In conflict with ourselves? An investigation of heuristic
and analytic processes in decision making. Memory & Cognition, 38(2), 186–196. http://
dx.doi.org/10.3758/MC.38.2.186.
Brewer, W. F., & Sampaio, C. (2006). Processes leading to confidence and accuracy in sen-
tence recognition: A metacognitive approach. Memory, 14, 540–552. http://dx.doi.org/
10.1080/09658210600590302.
Busey, T. A., Tunnicliff, J., Loftus, G. R., & Loftus, E. (2000). Accounts of the confidence-
accuracy relation in recognition memory. Psychonomic Bulletin & Review, 7, 26–48.
Calderwood, R., Klein, G. A., & Crandall, B. W. (1988). Time pressure, skill, and move
quality in chess. The American Journal of Psychology, 101, 481–493. http://www.jstor.
org/stable/1423226.
Camerer, C., & Johnson, E. (1991). The process-performance paradox in expert judgment:
How can experts know so much and predict so badly? In K. A. Ericsson & J. Smith (Eds.),
Toward a general theory of expertise: Prospects and limits (pp. 195–217). New York, NY:
Cambridge University Press.
Chabris, C. F., & Hearst, E. S. (2003). Visualization, pattern recognition, and forward search:
Effects of playing speed and sight of the position on grandmaster chess errors. Cognitive
Science, 17, 637–648. http://dx.doi.org/10.1016/S0364-0213(03)00032-6.
Chabris, C. F., & Simons, D. J. (2010). The invisible gorilla (and other ways our intuitions deceive us).
New York, NY: Harmony Publishing.
Charness, N. (1991). Expertise in chess: The balance between knowledge and search. In
K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits
(pp. 39–63). New York, NY: Cambridge University Press.
Chase, W. G., & Simon, H. A. (1973a). Perception in chess. Cognitive Psychology, 4(1), 55–81.
http://dx.doi.org/10.1016/0010-0285(73)90004-2.
Chase, W. G., & Simon, H. A. (1973b). The mind’s eye in chess. In W. G. Chase (Ed.), Visual
information processing (pp. 215–281). New York, NY: Academic press.
Daniel, D., & Klaczynski, P. A. (2006). Developmental and individual differences in condi-
tional reasoning: Effects of logic instructions and alternative antecedents. Child Develop-
ment, 77, 339–354. http://www.jstor.org/stable/3696473.
Darmstadter, H. (2013). Why do humans reason? Thinking and Reasoning, 13, 472–487.
http://dx.doi.org/10.1080/13546783.2013.802256.
Davis, D. A. (2009). How to help professionals maintain and improve their knowledge and
skills: Triangulating best practice in medicine. In K. A. Ericsson (Ed.), Development of
professional expertise: Toward measurement of expert performance and design of optimal learning
environments (pp. 180–202). New York, NY: Cambridge University Press.
De Neys, W. (2006). Dual processing in reasoning: Two systems but one reasoner. Psycho-
logical Science, 17, 428–433. http://dx.doi.org/10.1111/j.1467-9280.2006.01723.x.
De Neys, W., Cromheeke, S., & Osman, M. (2011). Biased but in doubt: Conflict and decision
confidence. PLoS ONE, 6, e15954. http://dx.doi.org/10.1371/journal.pone.0015954.
De Neys, W., & Franssens, S. (2009). Belief inhibition during thinking: Not always
winning but at least taking part. Cognition, 113(1), 45–61. http://dx.doi.org/10.1016/
j.cognition.2009.07.009.
De Neys, W., & Glumicic, T. (2008). Conflict monitoring in dual process theories of reasoning.
Cognition, 106, 1248–1299. http://dx.doi.org/10.1016/j.cognition.2007.06.002.
De Neys, W., Moyens, E., & Vansteenwegen, D. (2010). Feeling we’re biased: Autonomic
arousal and reasoning conflict. Cognitive, Affective, & Behavioral Neuroscience, 10(2),
208–216. http://dx.doi.org/10.3758/CABN.10.2.208.
De Neys, W., Rossi, D., & Houdé, O. (2013). Bats, balls, and substitution sensitivity: Cog-
nitive misers are no happy fools. Psychonomic Bulletin & Review. http://dx.doi.org/10.3758/s13423-013-0384-5. Advance online publication.
Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & Van Baaren, R. B. (2006). On making the
right choice: The deliberation-without-attention effect. Science, 311, 1005–1007. http://
dx.doi.org/10.1126/science.1121629.
Elqayam, S., & Evans, J. St. B. T. (2011). Subtracting ‘ought’ from ‘is’: Descriptivism versus
normativism in the study of the human thinking. Behavioral and Brain Sciences, 34(5),
233–248. http://dx.doi.org/10.1017/S0140525X1100001X.
Epstein, S. (2008). Intuition from the perspective of cognitive-experiential self-theory. In
H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment and decision making
(pp. 23–37). New York, NY: Psychology Press.
Epstein, S. (2010). Demystifying intuition: What it is, what it does, and how it does it. Psy-
chological Inquiry, 21(4), 295–312. http://dx.doi.org/10.1080/1047840X.2010.523875.
Ericsson, K. A. (2007). An expert-performance perspective on medical expertise: Study supe-
rior clinical performance rather than experienced clinicians!. Medical Education, 41,
1124–1130. http://dx.doi.org/10.1111/j.1365-2923.2007.02946.x.
Ericsson, K. A. (2009). Enhancing the development of professional performance:
Implications from the study of deliberate practice. In K. A. Ericsson (Ed.), The
development of professional expertise: Toward measurement of expert performance and design
of optimal learning environments (pp. 405–431). New York, NY: Cambridge
University Press.
Ericsson, K. A., Krampe, R. Th., & Tesch-Römer, C. (1993). The role of deliberate practice
in the acquisition of expert performance. Psychological Review, 100(3), 363–406. http://
dx.doi.org/10.1037/0033-295X.100.3.363.
Evans, J. St. B. T. (1998). Matching bias in conditional reasoning: Do we understand it after
25 years? Thinking & Reasoning, 4(1), 45–110. http://dx.doi.org/10.1080/135467898394247.
Evans, J. St. B. T. (2006). The heuristic–analytic theory of reasoning: Extension and evalu-
ation. Psychonomic Bulletin and Review, 13, 378–395. http://dx.doi.org/10.3758/
BF03193858.
Evans, J. St. B. T. (2007a). On the resolution of conflict in dual-process theories of reasoning.
Thinking & Reasoning, 13, 321–329. http://dx.doi.org/10.1080/13546780601008825.
Evans, J. St. B. T. (2007b). Hypothetical thinking: Dual processes in reasoning and judgement.
Hove, UK: Psychology Press.
Evans, J. St. B. T. (2010a). Intuition and reasoning: A dual process perspective. Psychological
Inquiry, 21(4), 313–326. http://dx.doi.org/10.1080/10478X.2010.521057.
Evans, J. St. B. T. (2010b). Thinking twice: Two minds in one brain. Oxford, UK: Oxford
University Press.
Evans, J. St. B. T., Barston, J., & Pollard, P. (1983). On the conflict between logic and belief
in syllogistic reasoning. Memory and Cognition, 11(3), 295–306. http://dx.doi.org/
10.3758/BF03196976.
Evans, J. St. B. T., & Curtis-Holmes, J. (2005). Rapid responding increases belief bias: Evi-
dence for the dual-process theory of reasoning. Thinking & Reasoning, 11(4), 382–389.
http://dx.doi.org/10.1080/13546780542000005.
Evans, J. S. B. T., Newstead, S. E., Allen, J. L., & Pollard, P. (1994). Debiasing by instruction:
The case of belief bias. European Journal of Cognitive Psychology, 6, 263–285. http://dx.doi.
org/10.1080/09541449408520148.
Evans, J. St. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition:
Advancing the debate. Perspectives on Psychological Science, 8, 223–241. http://dx.doi.
org/10.1177/1745691612460685.
Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in
judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 1–17. http://dx.
doi.org/10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives,
19(4), 24–42. http://dx.doi.org/10.1257/089533005775196732.
Gigerenzer, G., Todd, P. M., & The ABC Research Group (1999). Simple heuristics that make
us smart. New York, NY: Oxford University Press.
Gladwell, M. (2005). Blink: The power of thinking without thinking. New York, NY: Little,
Brown and Company.
Glöckner, A., & Betsch, T. (2010). Accounting for critical evidence while being precise and
avoiding the strategy selection problem in a parallel constraint satisfaction approach:
A reply to Marewski. Journal of Behavioral Decision Making, 23, 468–472. http://dx.doi.
org/10.1002/bdm.688.
Glöckner, A., & Betsch, T. (2012). Decisions beyond boundaries: When more information is
processed faster than less. Acta Psychologica, 139, 532–542. http://dx.doi.org/10.1016/
j.actpsy.2012.01.009.
Glöckner, A., & Witteman, C. (2010). Beyond dual-process models: A categorisation of pro-
cesses underlying intuitive judgment and decision making. Thinking & Reasoning, 16(1),
1–25. http://dx.doi.org/10.1080/13546780903395748.
Gobet, F., & Simon, H. A. (1996). The roles of recognition processes and look-ahead search
in time-constrained expert problem solving: Evidence from grand-master-level chess.
Psychological Science, 7, 52–55. http://www.jstor.org/stable/40062907.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New
York, NY: Pantheon Books.
Hammond, K. R. (2010). Intuition, no!. . . Quasirationality, yes! Psychological Inquiry, 21(4),
327–337. http://dx.doi.org/10.1080/1047840X.
Handley, S. J., Newstead, S. E., & Trippas, D. (2011). Logic, beliefs, and instruction: A test of
the default interventionist account of belief bias. Journal of Experimental Psychology, 37(1),
28–43. http://dx.doi.org/10.1037/a0021098.
Heit, E., & Hayes, B. K. (2011). Predicting reasoning from memory. Journal of Experimental
Psychology General, 140(1), 76–101. http://dx.doi.org/10.1037/a0021488.
Heit, E., & Hayes, B. K. (2013). How similar are recognition memory and inductive reason-
ing? Memory & Cognition, 41, 781–795. http://dx.doi.org/10.3758/s13421-013-0297-6.
Heit, E., Rotello, C. M., & Hayes, B. K. (2012). Relations between memory and reasoning.
Psychology of Learning and Motivation: Advances in Research and Theory, 57, 57–101. http://
dx.doi.org/10.1016/B978-0-12-394293-7.00002-9.
Hélie, S., & Sun, R. (2010). Incubation, insight, and creative problem solving: A unified
theory and a connectionist model. Psychological Review, 117, 994–1024. http://dx.doi.
org/10.1037/a0019532.
Hendricks, M. A., Conway, C. M., & Kellogg, R. T. (2013). Using dual-task methodology to
dissociate automatic from nonautomatic processes involved in artificial grammar learning.
Journal of Experimental Psychology Learning, Memory, & Cognition, 39, 1491–1500. Advance
online publication, http://www.ncbi.nlm.nih.gov/pubmed/23627281.
Hogarth, R. M. (2001). Educating intuition. Chicago, IL: University of Chicago Press.
Hogarth, R. M. (2010). Intuition: A challenge for psychological research on decision making.
Psychological Inquiry, 21(4), 338–353. http://dx.doi.org/10.1080/1047840X.2010.520260.
Jacoby, L. L., Kelley, C. M., & Dywan, J. (1989). Memory attributions. In H. L. Roediger &
F. I. M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving
(pp. 391–422). Hillsdale, NJ: Erlbaum.
Janoff-Bulman, R. (Ed.). (2010). Special issue on intuition [special issue]. Psychological
Inquiry, 21(4), 271–398.
Kahneman, D. (2003). A perspective on judgement and choice: Mapping bounded
rationality. American Psychologist, 58(9), 698–720. http://dx.doi.org/10.1037/0003-
066X.58.9.697.
Kahneman, D. (2011). Thinking fast and slow. Toronto, ON: Doubleday Canada.
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree.
American Psychologist, 64(6), 515–526. http://dx.doi.org/10.1037/a0016755.
Kahneman, D., Schkade, D., & Sunstein, C. (1998). Shared outrage and erratic awards:
The psychology of punitive damages. Journal of Risk and Uncertainty, 16, 49–86.
http://dx.doi.org/10.1023/A:1007710408413.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review,
80(4), 237–251. http://dx.doi.org/10.1037/h0034747.
Kelley, C. M., & Jacoby, L. L. (1993). The construction of subjective experience: Memory
attributions. In M. Davies & G. W. Humphreys (Eds.), Consciousness: Psychological and
philosophical essays (readings in mind and language) (pp. 74–89). Malden, MA: Blackwell.
Kelley, C. M., & Jacoby, L. L. (1996). Memory attributions: Remembering, knowing, and
feeling of knowing. In L. M. Reder (Ed.), Implicit memory and metacognition (pp. 287–308).
Hillsdale, NJ: Erlbaum.
Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of
two-system theories. Perspectives on Psychological Science, 4, 500–533. http://dx.doi.org/
10.1111/j.1745-6924.2009.01164.x.
Klauer, K. C., & Singmann, H. (2012). Does logic feel good? Testing for intuitive detection
of logicality in syllogistic reasoning. Journal of Experimental Psychology Learning, Memory,
and Cognition, 39, 1265–1273. http://dx.doi.org/10.1037/a0030530.
Klein, G. A. (1999). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the
fireground. In Proceedings of the Human Factors and Ergonomics Society 30th annual meeting,
Vol. 1, (pp. 576–580).
Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of
knowing. Psychological Review, 100, 609–639. http://dx.doi.org/10.1037/0033-
295X.100.4.609.
Koriat, A. (1995). Dissociating knowing and the feeling of knowing: Further evidence for the
accessibility model. Journal of Experimental Psychology General, 124(3), 311–333. http://dx.
doi.org/10.1037/0096-3445.124.3.311.
Koriat, A. (1997). Monitoring one’s knowledge during study: A cue-utilization approach to
judgments of learning. Journal of Experimental Psychology General, 126, 349–370. http://
dx.doi.org/10.1037/0096-3445.126.4.349.
Koriat, A. (2007). Metacognition and consciousness. In P. D. Zelazo, M. Moscovitch, &
E. Thompson (Eds.), The Cambridge handbook of consciousness (pp. 289–325).
Cambridge, UK: Cambridge University Press.
Koriat, A. (2012). The self-consistency model of subjective confidence. Psychological Review,
119(1), 80. http://dx.doi.org/10.1037/a0025648.
Koriat, A., Ackerman, R., Lockl, K., & Schneider, W. (2009). The memorizing-effort
heuristic in judgment of memory: A developmental perspective. Journal of Experimental
Child Psychology, 102, 265–279. http://dx.doi.org/10.1016/j.jecp.2008.10.005.
Koriat, A., Bjork, R. A., Sheffer, L., & Bar, S. (2004). Predicting one’s own forgetting: The
role of experience-based and theory-based processes. Journal of Experimental Psychology
General, 133(4), 643–656.
Koriat, A., & Levy-Sadot, R. (1999). Processes underlying metacognitive judgments:
Information-based and experience-based monitoring of one’s own knowledge. In
S. Chaiken & Y. Trope (Eds.), Dual process theories in social psychology (pp. 483–502).
New York, NY: Guilford Publications.
Koriat, A., & Levy-Sadot, R. (2001). The combined contributions of the cue-familiarity and
accessibility heuristics to feelings of knowing. Journal of Experimental Psychology Learning,
Memory, and Cognition, 27(1), 34–53. http://dx.doi.org/10.1037/0278-7393.27.1.34.
Lassiter, G. D., Lindberg, M. J., González-Vallejo, C., Bellezza, F. S., & Phillips, N. D.
(2009). The deliberation-without-attention effect evidence for an artifactual interpreta-
tion. Psychological Science, 20(6), 671–675. http://dx.doi.org/10.1111/j.1467-
9280.2009.02347.x.
Markovits, H., Venet, M., Janveau-Brennan, G., Malfait, N., Pion, N., & Vadeboncoeur, I.
(1996). Reasoning in young children: Fantasy and information retrieval. Child Develop-
ment, 67, 2857–2872.
Matvey, G., Dunlosky, J., & Guttentag, R. (2001). Fluency of retrieval at study affects
judgements of learning (JOLs): An analytic or nonanalytic basis for JOLs? Memory and
Cognition, 29, 222–233. http://dx.doi.org/10.3758/BF03194916.
McMackin, J., & Slovic, P. (2000). When does explicit justification impair decision making?
Applied Cognitive Psychology, 14(6), 527–541. http://dx.doi.org/10.1002/1099-0720
(200011/12)14:6<527::AID-ACP671>3.0.CO;2-J.
Mednick, S. A., & Mednick, M. T. (1967). Examiner’s manual, remote associates test. Boston,
MA: Houghton Mifflin.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative
theory. Behavioral and Brain Sciences, 34(2), 57. http://dx.doi.org/10.1017/
S0140525X10000968.
Morsanyi, K., & Handley, S. J. (2012). Logic feels so good-I like it! Evidence for intuitive
detection of logicality in syllogistic reasoning. Journal of Experimental Psychology Learning,
Memory, and Cognition, 38(3), 596–616. http://dx.doi.org/10.1037/a0026099.
Moxley, J. H., Ericsson, K. A., Charness, N., & Krampe, R. T. (2012). The role of intuition
and deliberative thinking in experts’ superior tactical decision-making. Cognition, 124,
72–78. http://dx.doi.org/10.1016/j.cognition.2012.03.005.
Mueller, M. L., Tauber, S. K., & Dunlosky, J. (2012). Contributions of beliefs and processing
fluency to the effect of relatedness on judgments of learning. Psychonomic Bulletin &
Review, 19, 1–7. http://dx.doi.org/10.3758/s13423-012-0343-6.
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new find-
ings. The Psychology of Learning and Motivation, 26, 125–173.
Newell, B. R., & Rakow, T. (2011). Revising beliefs about the merits of unconscious
thought: Evidence in favor of the null hypothesis. Social Cognition, 29, 711–772.
Newell, B. R., Wong, K. Y., Cheung, J. C., & Rakow, T. (2009). Think, blink or sleep
on it? The impact of modes of thought on complex decision making. The
Quarterly Journal of Experimental Psychology, 62(4), 707–732. http://dx.doi.org/
10.1080/17470210802215202.
Newstead, S., Pollard, P., Evans, J., & Allen, J. (1992). The source of belief bias effects in syllogistic
reasoning. Cognition, 45, 257–284. http://dx.doi.org/10.1016/0010-0277(92)90019-E.
Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin &
Review, 11(6), 988–1010. http://dx.doi.org/10.3758/BF03196730.
Pennycook, G., & Thompson, V. A. (2012). Reasoning with base rates is routine, relatively
effortless, and context dependent. Psychonomic Bulletin & Review, 19(3), 528–534. http://
dx.doi.org/10.3758/s13423-012-0249-3.
Pennycook, G., Trippas, D., Handley, S., & Thompson, V. A. (2013). Base rates: Both
neglect and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cog-
nition. Published online before print, http://dx.doi.org/10.1037/a0034887.
Plessner, H., Betsch, C., & Betsch, T. (2008). Intuition in judgement and decision making. New
York, NY: Psychology Press.
Price, M. C., & Norman, E. (2008). Intuitive decisions on the fringes of consciousness: Are
they conscious and does it matter? Judgment and Decision Making, 3, 28–41. http://journal.
sjdm.org/bb3.pdf.
Reber, A. S. (1993). Implicit learning and tacit knowledge: An essay on the cognitive unconscious.
New York, NY: Oxford University Press.
Reder, L. M., & Ritter, F. E. (1992). What determines initial feeling of knowing? Familiarity
with question terms, not with the answer. Journal of Experimental Psychology Learning,
Memory, and Cognition, 18(3), 435–451. http://dx.doi.org/10.1037/0278-7393.18.3.435.
Reyna, V. F. (2012). A new intuitionism: Meaning, memory, and development in fuzzy-
trace theory. Judgment and Decision Making, 7(3), 332–359.
Reyna, V. F., & Brainerd, C. J. (1991). Fuzzy-trace theory and children’s acquisition of sci-
entific and mathematical concepts. Learning and Individual Differences, 3, 27–60. http://dx.
doi.org/10.1016/1041-6080(91)90003-J.
Reyna, V. F., & Lloyd, F. J. (2006). Physician decision-making and cardiac risk: Effects of
knowledge, risk perception, risk tolerance, and fuzzy processing. Journal of Experimental
Psychology Applied, 12, 179–195. http://dx.doi.org/10.1037/1076-898X.12.3.179.
Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judg-
ment. Personality and Social Psychology Bulletin, 34, 1096–1109. http://dx.doi.org/
10.1177/0146167208317771.
Schwartz, B. L., Benjamin, A. S., & Bjork, R. A. (1997). The inferential and experiential basis
of metamemory. Current Directions in Psychological Science, 6, 132–137. http://dx.doi.org/
10.1111/1467-8721.ep10772899.
Schwarz, N. (2004). Metacognitive experiences in consumer judgment and decision making. Jour-
nal of Consumer Psychology, 14, 332–348. http://dx.doi.org/10.1207/s15327663jcp1404_2.
Schwarz, N., & Vaughn, L. A. (2002). The availability heuristic revisited: Ease of recall and
content of recall as distinct sources of information. In T. Gilovich, D. Griffin, &
D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgements
(pp. 103–119). New York, NY: Cambridge University Press.
Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction
framework. Psychological Bulletin, 134(2), 207–222. http://dx.doi.org/10.1037/0033-
2909.134.2.207.
Shynkaruk, J. M., & Thompson, V. A. (2006). Confidence and accuracy in deductive rea-
soning. Memory & Cognition, 34(3), 619–632. http://dx.doi.org/10.3758/BF03193584.
Sinclair, M. (2010). Misconceptions about intuition. Psychological Inquiry, 21(4), 378–386.
http://dx.doi.org/10.1080/1047840X.2010.523874.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin,
119, 3–22. http://dx.doi.org/10.1037/0033-2909.119.1.3.
Sloman, S. A. (2002). Two systems of reasoning. In T. Gilovich, D. Griffin, & D. Kahneman
(Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 379–398). New York,
NY: Cambridge University Press.
Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning. New York,
NY: Psychology Press.
Stanovich, K. E. (2004). Metarepresentation and the great cognitive divide: A commentary
on Henriques' "Psychology Defined." Journal of Clinical Psychology, 60(12), 1263–1266.
http://dx.doi.org/10.1002/jclp.20070.
Stanovich, K. E. (2011). Rationality and the reflective mind. New York, NY: Oxford University
Press.
Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and
cognitive ability. Journal of Personality and Social Psychology, 94, 672–695. http://dx.doi.
org/10.1037/0022-3514.94.4.672.
Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton, NJ:
Princeton University Press.
Thompson, V. A. (2009). Dual process theories: A metacognitive perspective. In J. Evans &
K. Frankish (Eds.), In two minds: Dual processes and beyond (pp. 171–195). New York, NY:
Oxford University Press.
Thompson, V. A. (2013). Why it matters: The implications of autonomous processes for
dual-process theories—Commentary on Evans & Stanovich (2013). Perspectives on Psy-
chological Science, 8, 253–256. http://dx.doi.org/10.1177/1745691613483476.
Thompson, V. A., Evans, J. St. B. T., & Campbell, J. I. D. (2013). Matching bias on selection
task: It’s fast and it feels good. Thinking & Reasoning, 13, 431–452, http://dx.doi.org/
10.1080/13546783.2013.820220.
Thompson, V. A., Evans, J. St. B. T., & Handley, S. J. (2005). Persuading and dissuading by
conditional argument. Journal of Memory and Language, 53, 238–257. http://dx.doi.org/
10.1016/j.jml.2005.03.001.
Thompson, V. A., & Morsanyi, K. (2012). Analytic thinking: Do you feel like it? Mind &
Society, 11, 93–105. http://dx.doi.org/10.1007/s11299-012-0100-6.
Thompson, V. A., Prowse-Turner, J. A., & Pennycook, G. (2011). Intuition, reason, and
metacognition. Cognitive Psychology, 63(3), 107–140. http://dx.doi.org/10.1016/j.
cogpsych.2011.06.001.
Thompson, V. A., Prowse-Turner, J., Pennycook, G., Ball, L., Barak, H., Yael, O., et al.
(2013). The role of answer fluency and perceptual fluency as metacognitive cues for ini-
tiating analytic thinking. Cognition, 128(2), 237–251. http://dx.doi.org/10.1016/j.
cognition.2012.09.012.
Thorsteinson, T. J., & Withrow, S. (2009). Does unconscious thought outperform conscious
thought on complex decisions? A further examination. Judgment and Decision Making,
4(3), 235–247.
Topolinski, S. (2011). A process model of intuition. European Review of Social Psychology,
22(1), 274–315. http://dx.doi.org/10.1080/10463283.2011.640078.
Topolinski, S., Likowski, K. U., Weyers, P., & Strack, F. (2009). The face of fluency: Seman-
tic coherence automatically elicits a specific pattern of facial muscle reactions. Cognition
and Emotion, 23(2), 260–271. http://dx.doi.org/10.1080/02699930801994112.
Topolinski, S., & Reber, R. (2010). Gaining insight into the “aha” experience. Current Directions
in Psychological Science, 19(6), 402–405. http://dx.doi.org/10.1177/0963721410388803.
Topolinski, S., & Strack, F. (2009a). The analysis of intuition: Processing fluency and affect in
judgements of semantic coherence. Cognition and Emotion, 23(8), 1465–1503. http://dx.
doi.org/10.1080/02699930802420745.
Topolinski, S., & Strack, F. (2009b). Scanning the “fringe” of consciousness: What is felt and
what is not felt in intuitions about semantic coherence. Consciousness and Cognition, 18,
608–618. http://dx.doi.org/10.1016/j.concog.2008.06.002.
Topolinski, S., & Strack, F. (2009c). The architecture of intuition: Fluency and affect deter-
mine intuitive judgments of semantic and visual coherence and judgments of grammat-
icality in artificial grammar learning. Journal of Experimental Psychology General, 138,
39–63. http://dx.doi.org/10.1037/a0014678.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and
probability. Cognitive Psychology, 5(2), 207–232. http://dx.doi.org/10.1016/0010-0285
(73)90033-9.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of
choice. Science, 211, 453–458. http://dx.doi.org/10.1126/science.7455683.
Tversky, A., & Kahneman, D. (1983). Extension versus intuitive reasoning: The conjunction
fallacy in probability judgment. Psychological Review, 90(4), 293–315. http://dx.doi.org/
10.1037/0033-295X.90.4.293.
Vadeboncoeur, I., & Markovits, D. (1999). The effect of instructions and information
retrieval on accepting the premises in a conditional reasoning task. Thinking and Reason-
ing, 5(2), 97–113. http://dx.doi.org/10.1080/135467899394011.
Wason, P. C., & Evans, J. S. B. T. (1975). Dual processes in reasoning? Cognition, 3(2),
141–154. http://dx.doi.org/10.1016/0010-0277(74)90017-1.
Whittlesea, B. W. A., Jacoby, L. L., & Girard, K. (1990). Illusions of immediate memory: Evi-
dence of an attributional basis for feelings of familiarity and perceptual quality. Journal of
Memory & Language, 29, 716–732. http://dx.doi.org/10.1016/0749-596X(90)90045-2.
Whittlesea, B. W. A., & Leboe, J. P. (2003). Two fluency heuristics (and how to tell them
apart). The Journal of Memory and Language, 49, 62–79. http://dx.doi.org/10.1016/
S0749-596X(03)0000-3.
Wilson, T. D., & Schooler, J. W. (1991). Thinking too much: Introspection can reduce the
quality of preferences and decisions. Journal of Personality and Social Psychology, 60(2), 181.
http://dx.doi.org/10.1037/0022-3514.60.2.181.
Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psy-
chologist, 35(2), 151. http://dx.doi.org/10.1037/0003-066X.35.2.151.
Zeelenberg, M., Nelissen, R. M., Breugelmans, S. M., & Pieters, R. (2008). On emotion
specificity in decision making: Why feeling is for doing. Judgment and Decision Making,
3(1), 18–27.
