
Cognitive Science 43 (2019) e12735
© 2019 Cognitive Science Society, Inc. All rights reserved.
ISSN: 1551-6709 online
DOI: 10.1111/cogs.12735

On Fodor’s First Law of the Nonexistence of Cognitive Science
Gregory L. Murphy
Department of Psychology, New York University
Received 6 November 2018; received in revised form 2 April 2019; accepted 4 April 2019

Abstract
In his enormously influential The Modularity of Mind, Jerry Fodor (1983) proposed that the
mind was divided into input modules and central processes. Much subsequent research focused on
the modules and whether processes like speech perception or spatial vision are truly modular.
Much less attention has been given to Fodor’s writing on the central processes, what would today
be called higher-level cognition. In “Fodor’s First Law of the Nonexistence of Cognitive
Science,” he argued that central processes are “bad candidates for scientific study” and would
resist attempts at empirical analysis. This essay evaluates his argument for this remarkable claim,
concluding that although central processes may well be “messier” than input modules, this does
not mean that they cannot be studied and understood. The article briefly reviews the scientific pro-
gress made in understanding central processes in the 35 years since the book was published,
showing that Fodor’s prediction is clearly falsified by massive advances in topics like decision
making and analogy. The essay concludes that Fodor’s Law was not based on a clear argument
for why the complexities of central systems could not be studied but was likely based on intu-
itions and preferences that were common in psychology at the time.

Keywords: Cognitive science; Jerry Fodor; Higher-level cognition; Philosophy of science; Concepts

Correspondence should be sent to Gregory L. Murphy, Department of Psychology, New York University, 6 Washington Place, 8th floor, New York, NY 10003. E-mail: gregory.murphy@nyu.edu

Jerry Fodor, who died on November 29, 2017, was arguably the pre-eminent modern
philosopher of cognitive science. Fodor helped to establish the validity of a functional
approach to the mind, fighting the tide of reductionism as the field of neuroscience grew
and became more successful. His essays in The Language of Thought (1975) and Repre-
sentations (1981) certainly had a great influence on me as a student and then as a
researcher in cognitive psychology. He also made significant contributions to psycholinguistics, imagery, semantics, concepts, and other topics in cognitive science and
the philosophy of mind.
Fodor’s most important contribution to psychology was The Modularity of Mind
(1983). Although his PhD was in philosophy, Fodor did not shy away from getting into
the empirical weeds of cognitive psychology. Indeed, he co-authored the first major text
of psycholinguistics (Fodor, Bever, & Garrett, 1974). In Modularity, he proposed that the
mind is organized into distinct input systems that each do a specific job, are mostly hard-
wired, and are innately determined. Exactly what those modules are is an empirical ques-
tion to be discovered, but commonly proposed modules include spatial vision, speech per-
ception, and object identification. However, input systems alone can only get you so far.
In order to decide what to do, make and carry out plans, and perform the sophisticated
behaviors humans do, Fodor proposed that people must employ central systems that take
in all the input, evaluate it, and then draw conclusions.
For example, if I read in the newspaper that traffic is going to be bad on Interstate 95
this weekend, and if someone tells me that there is construction on the Merritt Parkway,
and if on Saturday I see backups on roads leading to my local highway, I can put
together all this information from memory and different sensory sources to derive the
conclusion that I should not bother driving to New Haven after all. If we did not have
central systems that could integrate all the sources of our knowledge, we might only be
stimulus-response machines, unable to make a coherent plan or take into account infor-
mation learned long ago and under different circumstances.
Modularity was enormously influential and was widely read in graduate seminars and
courses in psychology.1 Furthermore, it generated an enormous amount of empirical work
devoted to testing whether various input systems were or were not modular according to
Fodor’s criteria. The main focus of much of that work was the criterion of information
encapsulation, the idea that input systems do fast computations because they are not
influenced by information from other modules or the central system. For example, visual
illusions still look like illusions even though you know what they should look like. Your
higher-level belief cannot influence the low-level perceptual processes that compute
length or direction of movement. This division of labor allows fast and accurate process-
ing in the vast majority of cases. The field of psycholinguistics in particular was almost
consumed with the question of whether speech comprehension or sentence parsing could
be influenced by top-down processes. For example, could a parsing decision be avoided
because the result would be semantically implausible? How and when would such seman-
tic considerations influence the parsing process? (See Ferreira & Nye, 2017, and Garrett,
2017, for recent reviews from a perspective sympathetic to modularity.)
My subjective impression of this aspect of the modularity hypothesis is that, like
almost every strong claim about psychology, it has all turned out to be more complex
than anyone thought. Some aspects of input systems do seem to be impenetrable by
knowledge, whereas other aspects are penetrable. . . depending on how strong and reliable
the knowledge is, when it is activated, and so on. However, this essay is not about the
modules themselves, which were almost the entire focus of psychological research on
modularity. Instead, this article considers the central systems. Fodor had a negative view
of the central systems—that is, negative from the perspective of developing a scientific
understanding of them. The modules act in predictable, lawlike ways, he thought, exactly
because they are separate, impenetrable devices. Once a process can be influenced by a
huge number of information sources and factors, it becomes more and more difficult to
analyze. This led him to doubt whether central systems could ever be understood.
As it is 35 years (at this writing) since Modularity appeared, it seems an appropriate
time to review and evaluate this argument now that we have had more decades of actual
research on central systems. Although there were many evaluations of his main modular-
ity proposal (usually taken on a domain-by-domain basis), there has been much less atten-
tion given to the final section of his book, which addresses many topics of interest to
cognitive science, including reasoning, planning, decision-making, problem solving, and
concepts. Did Fodor have good reason for making his argument? And has it stood the test
of time? This essay attempts to answer those questions (using a somewhat looser style
than in the usual Cognitive Science article, in part because this is very much Fodor’s own
way of writing). After Modularity’s publication, the Behavioral and Brain Sciences pub-
lished a précis of the book, along with multiple commentaries (Fodor, 1985, referred to henceforth as the Précis). Some of the commentators did address the final section of the
book, as I will note below. However, because of the strict space limits, they could not
discuss his argument in as much detail as I will. Furthermore, we now have the subse-
quent decades to act as an empirical test of Fodor’s predictions about the field.
I describe Fodor’s view next.

1. Characterizing the central systems

Fodor gives a number of reasons for believing that the central systems must exist (pp.
102 ff; unattributed page references are all to Modularity). One is a perhaps philosophical
concern, namely that different inputs are necessary to achieve fixation of belief (p. 102).
Given all its inputs, the organism wishes to arrive at a hypothesis of what the state of the
world is. It seems undeniable that you can hear something, see something, remember
something, and, with your knowledge of the world, draw an inference that integrates all
this knowledge (like my conclusion about driving to New Haven). Since language can
talk about information derived from multiple senses, Fodor argues (p. 102) that our lan-
guage system can draw on central systems. Finally, Fodor (p. 103) suggests that making
plans and decisions requires information from the senses combined with goals and utili-
ties. Since (on his hypothesis) the perceptual systems are not affected by goals or utilities,
there must be a broader system that combines the perceptual information with them to
develop plans or to seek out relevant information.
Not everyone agrees that central systems are completely non-modular. For example,
Howard Gardner in the Précis suggested that mathematical and musical abilities might be
modules (and see discussion of ACT-R below). However, more recent proposals of such
things have tended to focus on limited, specific abilities being modules, such as numeros-
ity perception (Feigenson, Dehaene, & Spelke, 2004). Music ability as a whole might be
separate in some ways from other kinds of cognition, but knowledge of harmony or mem-
ory of compositions would not fit Fodor’s definition of input modules. Gardner aside,
even those who believe that some part of higher-level cognition is modular likely do not
believe that all of it is modular—that is, they probably believe in more general processes
of thinking, as discussed next.
The central systems, in Fodor’s view, are roughly the “sorts of systems that people
have in mind when they talk, pretheoretically, of such mental processes as thought and
problem-solving” (p. 103). Since the time of Fodor’s writing, the term higher-level cogni-
tion has come to refer to such things, normally including processes such as categoriza-
tion, decision making, planning, and discourse comprehension in addition to those Fodor
mentioned. According to Fodor, central systems are neither domain specific nor informa-
tionally encapsulated (pp. 103–104). They form hypotheses and beliefs about all kinds of
things, not just within a single domain (like spatial vision), and they (potentially) use all
the information that is known. There is no specific limit on which knowledge might be
used in order to form a belief. For example, in deciding whether an animal in the road at
night is a turkey, I could use my knowledge of global warming, dates of the local hunting
season, claims of neighbors to have seen turkeys, past experiences in seeing turkeys, and
so on. This property of isotropy, the potential relevance of wide sources of knowledge, is
characteristic of central systems, Fodor says.2 That is a large part of what he sees as their
problem.
Fodor points out that much reasoning seems to be analogical, using the example of sci-
entific theories as a prime case. This leads to his claim, which is worth quoting in full,

It is striking that, while everybody thinks that analogical reasoning is an important ingredient in all sorts of cognitive achievements that we prize, nobody knows anything about how it works. . . . I don’t think that this is an accident either. In fact, I should like to propose a generalization; one which I fondly hope will some day come to be known as “Fodor’s First Law of the Nonexistence of Cognitive Science.” It goes like this: the more global (e.g., the more isotropic) a cognitive process is, the less anybody understands it. Very global processes, like analogical reasoning, aren’t understood at all. (p. 107)

(This raises the point that Fodor’s law is not about whether the field of Cognitive
Science exists or will exist, as one might think, but is a claim about the study of higher-
level cognition.)
On what basis does Fodor say that central processes are not understood? Of course, he
was speaking from the perspective of 1983 (or however earlier he wrote this section of
the book), when higher-level cognition was not as well studied as it has been now.
Nonetheless, one might have expected some kind of review of the literature or criticism
of extant theories. He does provide (p. 116) an argument that the theoretical framework
of schemata or frames (e.g., Schank & Abelson, 1977) does not provide an answer to the
problem of isotropy. This argument seems correct, but it isn’t clear that this framework
was intended to solve that problem. It was designed to structure complex knowledge so
that relations between elements would be specified, in contrast to unstructured associative representations.
Fodor also draws support for his Law from evidence regarding the brain areas that underlie input systems versus central systems. A then-recent issue of Scientific American was devoted to the brain, covering the neuroscience of perception and language. “But
there is nothing on the neuropsychology of thought—presumably because nothing is
known about the neuropsychology of thought. I am suggesting that there is a good reason
why nothing is known about it—namely, that there is nothing to know about it” (p. 119).
Whereas input systems have consistent brain-behavior relations, central systems are (by
necessity) represented by connections that go all over the place, so that the different
sources of information can be integrated. Hence, there is “no stable neural architecture to
write Scientific American articles about” (p. 119).
I am second to no one in my admiration for the editors of Scientific American, who
have long done a fine job of communicating science to the people. However, it is remark-
able that the absence of an article on a given topic in Scientific American should be used
as evidence for absence of actual scientific knowledge about a topic, much less a predic-
tion that in the future such knowledge will not be gained (Précis commentator Robert
Sternberg noted the same).3 In recent years, there have certainly been many, many arti-
cles on the neural bases of higher-level cognition, for example, the relevance of the fron-
tal lobes to planning and decision making (e.g., Damasio, 1994; Glimcher & Rustichini,
2004; among many others). Similarly, the neuroscience of concepts now includes detailed
hypotheses about how features of concepts are distributed across the brain but are all con-
nected to a conceptual “hub” in the anterior temporal lobe (Patterson, Nestor, & Rogers,
2007; Rogers et al., 2004). It turns out that everything is not actually connected to every-
thing, and there is structure even in the realm of knowledge representation. The role of
the basal ganglia in assigning value in choices is also the topic of dozens of articles. At
least for repeated choices, the brain stores the value of different options, providing one
basis for utility in decision theories (see Daw & Tobler, 2014). This is just to make the
obvious point that the failure of a young science to have studied a particular topic does
not mean that the topic will never be studied and that little can be learned about it. The
underlying brain processes of higher-level cognition are now an intense topic of study.
Fodor’s Law of the Nonexistence of Cognitive Science is a claim not only about neu-
ropsychology but about psychology as well. His claim that the more isotropic something
is the less anybody understands it and that very global processes aren’t understood at all
(p. 107) is not restricted to brain-behavior relations. Rather, I think we must take Fodor
as indicating that there won’t be coherent psychological explanations of complex pro-
cesses like making analogies or solving problems. Later on, he says bluntly that “the psy-
chology of thought has proved quite intractable” (p. 126), and in the Précis (p. 33), he
affirms Forster’s observation that his book “is a programmatic sketch of the kinds of
things it would be worth having a theory about.” Higher-level cognition apparently does
not make the cut.
In understanding this strong language, I think it is important to note that the attitude it
expresses was not unique to Fodor, but in fact was a prevalent one in many circles in
experimental psychology for many years. As an undergraduate in the 1970s, I often heard
psychologists opining things like “Why on earth would anyone study discourse compre-
hension/language/concepts/autobiographical memory/cognitive development . . . when we
don’t even understand operant conditioning/verbal learning/psychophysics . . . yet?
There’s no way to scientifically study those things with all the variables you have to con-
trol.” People interested in psycholinguistics were told to first focus on tractable problems
like memory for word lists. Such comments are more likely to be made in private conver-
sations, reviews, or questions in talks than in print, but this attitude was no secret at the
time.
While I was on a job interview in 1990, a member of the search committee told me,
“Your work on categories is very interesting, but I take a scientific perspective on things.”
The horror of the other members of the search committee hearing this was quite amusing,
but I was not surprised that an experimental psychologist felt that studying concepts and
knowledge is not true science, because I had heard it before. Indeed, I had even heard
from other researchers studying categories and concepts that my studies of knowledge-
based category learning (e.g., Kaplan & Murphy, 2000; Spalding & Murphy, 1996) were
not looking at “real concepts,” as opposed to studies that used categories of color patches
or geometric shapes. Much of the history of experimental psychology in the past 50 years
is a progression of subjecting increasingly complex behaviors to experimental analysis,
against the general belief that such topics will not yield to scientific study. But in 1983,
Fodor would have found many experimental psychologists who agreed with him, whether
or not they knew the concept of isotropy. In particular, psychologists studying one of the
input modules very likely already believed that higher-level cognition is too unstructured
and messy to understand.
Scientists choose topics to study that they are comfortable with. Many of my col-
leagues and friends would not feel comfortable dealing with the variability and complex-
ity involved in higher-level cognition. They work on psychophysics or animal learning. I
would not feel comfortable in dealing with more variable and complex topics like social
development or political psychology, where many of the critical variables cannot be
manipulated in the lab. Seen in this light, Fodor’s suggestion that higher-level cognition
will be little understood is not unique but rather reflects his personal taste and interest in
dealing with the kinds of variables that are involved in studying higher-level knowledge
and reasoning. Who knows what kind of nutty thing will end up explaining people’s solu-
tion to a problem? You might have to consider the totality of their knowledge, their
recent experiences, or even their personalities, God forbid. The problem of isotropy is
that nothing is ruled out as being relevant to the topic, and so you could be mired in a
swamp of potential variables, many of which have some influence on the process. How
can you control all this in the lab? Such concerns give nightmares to some, who then go
on to study topics like color constancy.
My suggestion, then, is that Fodor’s worries about the science of central processes
have their origins not in a careful analysis of the literature (except for Scientific Ameri-
can, perhaps), but instead in the same personal tendencies and preferences that influence
all our attitudes towards which topics within cognitive science (or science or scholarship
more generally) are worth studying and are likely to be rewarding for us. Fodor’s conclu-
sion matches that of many experimental psychologists in the 1970s and 1980s, though
few of them proposed their intuitions as laws. (In the Précis, Kenneth Forster explicitly
endorsed Fodor’s conclusion that research on “thought and reasoning” “has not led to any
significant increase in understanding or even to a clear appreciation of the problems,” p.
9.) Indeed, Fodor’s negative attitude towards higher-level cognition continued throughout
his life. The title of his 1998 book, Concepts: Where Cognitive Science Went Wrong
expresses his conclusion succinctly. In a review of Fodor’s The Mind Doesn’t Work That
Way, Ray Jackendoff (2002, p. 164) began, “As has been his wont in recent years, Jerry
Fodor offers here a statement of deepest pessimism about the possibility of doing cogni-
tive science except in a very limited class of subdomains.”
Fodor’s claim thus seems to stem from his own intuition and preferences. Such intu-
itions should be evaluated critically. I think it is fair to point out that input systems and
more basic aspects of psychology have not immediately succumbed to scientific investiga-
tion. My own impression is that we do not have adequate psychological theories of object
recognition or speech perception, for example. Although it is not a module, classical con-
ditioning is a basic process that is found throughout the animal kingdom. In spite of
being studied for over 100 years, it has firmly resisted a complete explanation, as recent
papers on extinction and cue competition show (Dunsmoor, Niv, Daw, & Phelps, 2015;
Maes et al., 2016; Soto, 2018; Urcelay, 2017).
Fodor believed that the (perceived) failure to understand central processes in 1983
indicated that they would never be understood. One might be tempted in 2018 to make
an inductive argument that is the opposite of Fodor’s: Given that we have studied classi-
cal conditioning since 1901, the fact that we do not yet understand it means that we never
will. In contrast, higher-level cognition, which began to be seriously studied by cognitive
psychologists in the 1980s, might still be understood, given that we have fewer years of
failure to induce from. Or perhaps it would be wiser to eschew these kinds of arguments
entirely. All these processes are very complex, and they tend to follow the rule that the
more you find out about them, the more you discover that you didn’t know, even for sim-
ple processes like conditioning. That is both the joy and the frustration of science, and it
is as true of input modules as it is of higher-level cognition.
Fodor discusses some of the then-current attempts to study intelligent thought, mostly
from AI researchers (pp. 126ff). He points out that they have attempted to divide up
thought into specific processes or tasks, which can then be analyzed. For example, can a
robot navigate around a series of obstacles to get out of a room? Or can a program solve
logic problems? However, Fodor says that these approaches are bankrupt (footnote, p.
139), because they deny the most interesting aspect of thought, its isotropy (p. 127). To
put it bluntly (p. 127), “. . . if central processes have the sorts of properties that I have
ascribed to them, then they are bad candidates for scientific study.”
I think that there is much wrong with this argument, and I address its components one
by one. First, it is questionable that isotropy is “what is most interesting” about central
processes (p. 127). It is interesting to Fodor, because he sees it as a critical dividing line
between the input modules and central processes. However, for psychologists, there are
many interesting phenomena of higher-level cognition. If we managed to account for them without fully explaining their isotropy, I would not be heartbroken.
It is also important to note that isotropy fails in many documented cases of higher-
level cognition in which people don’t use knowledge that they have or focus on immedi-
ately available information instead of more relevant principles. Précis commentators
Clark Glymour, Jerome Kagan, and John Morton also noted that isotropy is less common
than Fodor seems to think—and that it would be unhelpful in many cases. His response
was roughly that such failures of isotropy are performance errors and so are not the cen-
tral data to be accounted for. But 35 years of research has shown, I believe, that isotropic
failures are more typical of human thought than are the successes. For example, effects
of representativeness and salience in decision making are ones in which people rely on
information associated to the input rather than using broad principles of reasoning or dull,
factual information that they also know (Kahneman, 2011). In problem solving, people
will repeat the same solutions to a series of problems even when a simpler novel solution
would do better for the later problems (the set effect; see Woodworth & Schlosberg, 1954, pp. 836–837). Those who did not do the earlier problems easily discover the simpler solution.
People also do not notice that objects can be used in novel ways to solve a problem (as
in functional fixedness; Duncker, 1945). The familiar use prevents retrieval of object
properties that would help solve the problem. After reading a solution to one problem,
people will not apply a formally identical solution in another domain, as in Gick and
Holyoak’s (1980) classic demonstration, replicated many times since. Although isotropy
would suggest that the solution to the fortress problem (which they just read a few moments ago) would then be used to solve the radiology problem, the majority of even
highly selected university students fail to do so. Indeed, one of the truisms of learning
and instruction is that things learned in one context usually don’t transfer to other con-
texts that are formally identical (e.g., Ross, 1984).
Isotropy is great when it works, such as in scientific analogies, but to suggest that it is
an essential characteristic of higher-level cognition is simply incorrect. Even Fodor’s
prime example of analogy reflects such limitations. People find it difficult to construct
analogies across remote domains (Gick & Holyoak, 1980). No doubt successful scientists
have done so (Fodor’s example), but an analogy that generates a new theory or leads to a
major discovery is an accomplishment for which a professional gets promoted—not an
everyday behavior of ordinary people. Indeed, failures of imagination and failures to draw
interesting links are a common fact of life. I invite the reader to consult internet comment
sections on articles to see how many interesting analogies are present. I find that many
political acts are compared to those of Hitler or Stalin, and many jazz reviews compare
musicians to another musician who plays the same instrument. One seldom reads that a
proposed law reminds someone of snails or a symphony, something from a completely
different domain. Such comments sometimes occur, but not often enough to make them
the primary fact that must be accounted for in higher-level cognition.4 Isotropy is a possi-
bility—we cannot rule out the possibility that something I know about the state budget
will influence my conclusion about whether that dark shadow on the road is a turkey. But
it is by no means guaranteed that we will flexibly access remote areas of knowledge that
might be relevant, or even general principles that should be followed in this specific case.
Far too often we don’t.
However, even if Fodor were right that isotropy is the critical feature of higher-level
cognition, it doesn’t follow that dividing up the topic into bite-sized scientific problems
would be a bad way to study it. Fields like ecology must deal with the interconnectedness
of all the components that make it up. Animals and plants rely on one another for habitat,
food, pollination, and so on. Changes in climate affect plants, which affect insects and
foraging animals, which affect their predators, which affect. . .. There are massive connec-
tions between all the parts of an ecosystem, and each one can affect multiple other com-
ponents. However, ecologists manage to study systems by looking at sensible sub-topics.
They don’t say, “This is a bad candidate for scientific study” and give up. By understand-
ing the components, one can hope to piece them together to get an understanding of the
entire system. Of course, there are likely to be emergent properties that are not seen when
studying one or two components. But when the specific topics are understood to some
degree, their interactions can then be examined. After divide-and-conquer can come an
attempt at unification.
It is unclear why Fodor doesn’t think such an approach is possible for central sys-
tems. If we understand memory retrieval, similarity, and concepts, say, can we not
then try to understand when people do and do not find analogies? If we don’t know
that similarity is often based on uninteresting “superficial” features, we won’t be able
to explain why analogies are often not found (contrary to isotropy). If we don’t
understand people’s comprehension of relations, then analogies will remain obscure.
But once we get a better grip on all these things, we can start to piece together a
story, which might in turn reflect back on the components and tell us something new
about them (cf. Holyoak & Koh, 1987).
Furthermore, I wonder why Fodor was not worried about isotropy in other domains
besides the human mind. That is, findings about one topic can have implications,
sometimes critical ones, for topics that are not directly linked and that are studied by
different people with different methods. Discoveries in some parts of physics have
implications for other topics in physics or in very remote applications. For example,
the popular press has widely reported on how discoveries about atomic structure may
have the result of every computer code being crackable, through the power of quan-
tum computing (e.g., Swabey, 2015). Discoveries in quantum physics have implica-
tions for, say, cosmology, and vice versa. How is such a topic suitable for scientific
study, then? How can scientists possibly figure out all the myriad connections between
subatomic structure and everything else, from the smallest chemical reaction to the
properties of the largest objects in the universe? If we make a novel discovery about
genetics, it will surely have ramifications throughout all of biology and perhaps psy-
chology. Another bad topic for scientific study?
If Fodor’s claim is right, we cannot possibly understand such systems of interactions,
and if we study one little part of them, we will be ignoring their most essential feature,
the fact that every bit of them is so interconnected to the other bits. I cannot prove that
in fact human science will be able to figure out these complex systems. Perhaps we will
never reach the end of trying to do so. However, before I give up on higher-level cogni-
tion because it is isotropic, I would first like to see my colleagues in the Physics and
Biology departments close down their labs as being hopeless. After all, they have had a
lot longer to try to figure out their isotropic systems than cognitive psychologists have
had, and we should have our turn.
Fodor notes (p. 128) that there need to be “joints” that one can cut nature at in order
to understand it. He says that this is just as true for physics as psychology. The question
is why he thinks physics has such joints but central systems lack them. His apparent
understanding of the neurology underlying central systems is that everything must be con-
nected to everything (functionally) in order that all the different modules can send inputs
to the central system, which must also contain your semantic memory, and so on. That’s
just a big, undifferentiated mass of stuff, which can’t be readily analyzed. Of course, this
is just his supposition. There are in fact many attempts to localize different kinds of
knowledge and to establish their organization in the brain, with some degree of success
(e.g., Lewis, Poeppel, & Murphy, 2015; Patterson et al., 2007; a special issue of Cogni-
tive Neuropsychology is devoted to knowledge representation, vol. 33, issue 3–4). Fur-
thermore, even at the time of Modularity, there was a well-known tradition of semantic
memory research with models of how knowledge is organized (e.g., Collins & Quillian,
1969; Rosch, 1973; Smith, Rips, & Shoben, 1974). Simply because there is connectivity
of different input sources does not mean that the system is disorganized and has no oper-
ating principles.

1.1. Conclusion

Fodor’s argument for his First Law is not compelling. He did not provide empirical
evidence for his claim. The primary basis for his conclusion is the assumption that isotro-
pic domains are not amenable to scientific explanation. However, higher-level cognition
is not as isotropic as he seemed to assume, and it is simply not clear that domains with
wide-ranging connections are in fact resistant to scientific explanation. He did not demon-
strate that a divide-and-conquer strategy could not help to elucidate the complexities of
even isotropic domains, as seems to happen in other sciences. Thus, Fodor’s claim
appears to be based on his personal intuition, and he does not present an argument that
would be compelling to those who did not share it.
Of course, arguments about what will or won’t be successful are eventually replaced
by empirical results of success or failure. Since publication of Modularity, we have had
sufficient time to see whether there has been meaningful progress in understanding central
processes, so his prediction can be tested empirically. Of course, positive proof for his
pessimism will be hard to come by, as it is essentially claiming the null hypothesis
(“there will be little or no progress. . .”). But if there has been no progress, this would
give credence to his claim. On the other side, disconfirming evidence is a possibility, as
significant progress will cast doubt on Fodor’s Law.

2. Was Fodor’s prediction correct?

Fodor’s claim that central systems are a bad candidate for scientific study serves as a
prediction. There have been 35 more years of scientific research on central processes
since 1983. (Indeed, I would say that there has been more research on higher-level cogni-
tion in those 35 years than in all the time prior.) If Fodor was right, then that effort must
have been a sequence of futile exercises. Not only should we not understand central pro-
cesses, but there should have been little meaningful progress. (A complete theory of cen-
tral processes is not to be expected in this time, given that a complete theory of many
input systems does not yet exist. The question is whether there has been meaningful sci-
entific progress.) Of course, evaluating scientific progress is not always easy. In the
Précis (p. 14), Sam Glucksberg pointed out that “human memory, for example, is now
far better understood than it was in Ebbinghaus’s day,” even though memory is not a
module. Fodor surprisingly replied, “I doubt that this is so. . . we aren’t within hailing dis-
tance” of an explanation of memory’s constructive, isotropic nature (p. 36). Ultimately,
readers will have to make up their own minds as to whether there has been progress since
1983. . . or since Ebbinghaus’s time.
The reader will be relieved to know that I do not intend to review the literature of cen-
tral systems in general. Nor in specific. However, I do wish to touch on a few topics of
higher-level cognition, starting with the one that has probably received more attention
than any other: decision making. In spite of Fodor’s advice, hundreds of investigators
over the past 35 years have devoted themselves to understanding how people make deci-
sions across a wide domain, ranging from life-and-death medical choices (e.g., Betsch,
Böhm, & Chapman, 2015), financial decisions (e.g., Knoll, Appelt, Johnson, & Westfall,
2015), and mate selection (e.g., Miller & Todd, 1998) to experimental paradigms of gam-
bling and probabilistic reward learning (Daw, Gershman, Seymour, Dayan, & Dolan,
2011; Hertwig, Barron, Weber, & Erev, 2004; Steyvers, Lee, & Wagenmakers, 2009).
Have they learned anything?
Lacking a current issue of Scientific American, I have consulted some recent textbooks
on decision making. Such things exist, and they are not short. It seems that much has
been learned about this topic in spite of its unsuitability for scientific research. Indeed, I
have taught an introductory course on decision making and can assert that there are far
more topics and findings than can be covered in a semester. (I’ll discuss in the next sec-
tion whether the amount of published material really reflects progress.) For example,
Jonathan Baron’s Thinking and Deciding (2008) has entered its 4th edition and is 525
pages long (not counting the references and index). It covers topics such as logic, proba-
bility theory and the psychological perception of probability, heuristics and biases,
hypothesis testing, judgments of correlation, and myside bias. It discusses normative theo-
ries of decisions and whether people conform to them. Choice, utility, and quantitative
judgments are addressed. The book has a chapter on moral judgments and one on fairness
and justice, but these are topics that have spawned an entire field since the book’s publi-
cation, with dozens of articles appearing annually. Perception of risk and judgments about
the future (including temporal discounting) receive attention. I think that it is fair to say
that one could write an advanced textbook about any one of these topics and fill it up
with findings and theories. It would be easy to teach a semester-long seminar on temporal
discounting or nudges or Prospect Theory—for example, see a 50-page long review of
temporal discounting by Frederick, Loewenstein, and O’Donoghue (2002). It does not
seem that isotropy has sapped the ability of scientists to learn much about temporal dis-
counting, say, such that you never know what choices people will make, and there are
not enough reliable generalizations to make a scientific study of the topic.
I also happen to have two handbooks of reasoning in my office. One, The Cambridge
Handbook of Thinking and Reasoning (Holyoak & Morrison, 2005), has 32 chapters and
is almost 800 pages long. The other, a collection edited by Adler and Rips (2008) has a
mix of classic papers and new chapters—53 in total. It is over 1,000 pages long. (It
includes a selection from Modularity.) It seems that something has been learned about
reasoning and thinking since Scientific American’s failure to publish anything on the sub-
ject. The rise of research on causal reasoning and the battle over dual systems theory
have both engendered rich research programs on reasoning that did not exist at the time
Modularity was written.
Finally, we may consider the topic of analogy, which Fodor specifically picked out as
being unexplained and inexplicable (p. 107). Here it is useful to examine the book The
Analogical Mind, edited by Gentner, Holyoak, and Kokinov (2001). It is also not short—
over 500 pages—suggesting that there is in fact something to know about the psychology
of analogy. The editors’ introduction contains a brief history of the study of analogy in
cognitive science. It is true that almost all of the progress it cites was made after Modu-
larity was published. However, this just illustrates the danger of trying to make a predic-
tion about future progress based on current knowledge, especially for a little-studied
topic. Much has been learned about analogy, both with regard to people’s successes and
failures in detecting and producing them. One interesting aspect of this topic is the large
number of computational models developed to make or evaluate analogies, including the
Structure Mapping Engine (Falkenhainer, Forbus, & Gentner, 1989), ACME and ARCS
(Holyoak & Thagard, 1989; Thagard, Holyoak, Nelson, & Gochfeld, 1990), and LISA
(Hummel & Holyoak, 1997). The book presents a number of newer models that build
upon these earlier ones. As a general rule, it is impossible to make computational models
when a topic is not at all understood. Such models cannot be based on vague intuitions
or informal accounts.
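To give a concrete flavor of what such models compute, here is a minimal sketch of the structure-mapping idea (my own toy illustration in Python, with invented miniature domains; it is not Falkenhainer et al.’s actual engine). An analogy is scored by how much relational structure a pairing of objects preserves, regardless of what the objects themselves are:

```python
from itertools import permutations

# Each domain is a list of (relation, arg1, arg2) propositions.
# Toy version of the structure-mapping idea: prefer object pairings
# that align shared *relations*, ignoring what the objects are called.
solar = [("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun"),
         ("more_massive", "sun", "planet")]
atom = [("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus"),
        ("more_massive", "nucleus", "electron")]

def mapping_score(base, target, pairing):
    """Count base relations that still hold in the target under an
    object-to-object pairing (a dict like {'sun': 'nucleus'})."""
    mapped = {(rel, pairing.get(a), pairing.get(b)) for rel, a, b in base}
    return len(mapped & set(target))

def best_mapping(base, target):
    """Exhaustively try pairings and keep the one that aligns the most
    relational structure (only feasible for toy-sized domains)."""
    base_objs = sorted({x for _, a, b in base for x in (a, b)})
    targ_objs = sorted({x for _, a, b in target for x in (a, b)})
    best = max((dict(zip(base_objs, perm))
                for perm in permutations(targ_objs, len(base_objs))),
               key=lambda p: mapping_score(base, target, p))
    return best, mapping_score(base, target, best)

print(best_mapping(solar, atom))
# -> ({'planet': 'electron', 'sun': 'nucleus'}, 3)
```

Even this toy makes the point: sun maps to nucleus because of shared relations like revolves_around, not because suns resemble nuclei.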
Kevin Dunbar’s (2001) chapter on scientific analogies is interesting given the role of
scientific analogy in Fodor’s argument. Based on lab meetings, interviews, and written
reports, Dunbar argues that analogies are in fact constructed for purposes of theorizing
and explanation, as Fodor claimed. However, in the ordinary course of events, most
analogies are extremely local, “either from highly similar domains (e.g., HIV to HIV), or
domains from common superordinate categories (e.g., one type of virus to another type
of virus)” (p. 316). Theories of analogy can explain both of these things. Local analogies
can be retrieved by the similarity or identity of terms, and experiments have shown this
to be an extremely effective tool for analogy retrieval. It is not surprising that such
analogies are common, given principles of memory retrieval. More distant analogies can
be discovered by searching for common relations (e.g., prevents infection). That is more
difficult, as shown by the classic Gick and Holyoak findings, which is why analogy in
breakthrough scientific discovery is a true accomplishment. Dunbar discusses some of the
conditions necessary for that kind of analogy to be successful, such as deep encoding of
the structural relations. Those conditions are more likely to be fulfilled by experts in a
domain than by subjects tested in a single experimental session.
In short, not only do we understand a perhaps surprising amount about analogy, a topic about which Fodor argued that “nobody knows anything about how it works; not even in the dim in-
a-glass-darkly sort of way” (p. 107), but progress has been made on Fodor’s prime exam-
ple of analogy in science.
The Cognitive Science Society has established the Rumelhart Prize as a lifetime award
for researchers “making a significant contemporary contribution to the theoretical founda-
tions of human cognition” (from the Society’s website). Have any of the recipients
worked on central systems? Yes, quite a few have, including (in order of recency) Miche-
lene Chi, Michael Tanenhaus, Dedre Gentner, Ray Jackendoff, James McClelland (in his
models of semantic memory), and Susan Carey. Judea Pearl is a computer scientist, but
the causal Bayesian networks he developed have been used as models of causal reasoning
in many studies of higher-level cognition. In fact, although David Rumelhart is now best
known for his work in connectionism, prior to that he did important work on understand-
ing, knowledge representation, schemata, story grammars, and learning by analogy—all
topics of higher-level cognition (Rumelhart, 1980, 1981; Rumelhart & Norman, 1981).

2.1. Real progress?

Perhaps one might take a more skeptical view of the evidence I have cited of scientific
activity. Yes, there have been many experiments; many articles have been published;
there are many replicable phenomena; and many theories and models have been pro-
posed. But does this mean that there has been progress? And is this progress truly scien-
tific? One might argue that there is merely a collection of facts and results without a true
scientific understanding of decision making, reasoning, etc. (Fodor’s response to Glucks-
berg noted above seems to fit this form of reasoning, as surely a vast amount has been
learned about human memory.) Barry Schwartz wonders in his Précis response (p. 31)
whether we are “suffering, all of us, from an illusion of progress in understanding the
way in which central systems fix belief and determine action,” because we don’t test our
theories under realistic situations. I understand this argument, but it inevitably descends
into a tendentious discussion of what exactly is “scientific” understanding or “real pro-
gress.” According to this perspective, it isn’t enough that something be a topic that one
can study experimentally and with replicable data, but it must reach a standard of success
—a standard that is not defined. One is likely to refer to physics or chemistry to say,
“Here is a real science, where you know exactly what is going on,” and then to contrast
that with some other domain where things are messier and probabilistic, as well as not
fully understood. In my experience, the tendency to suggest that a given discipline “really
knows what is going on” is inversely related to familiarity with that discipline, but let’s
accept the premise for the moment.
A question is whether this argument might not apply also to the psychology of input
systems. For example, one cannot specify simple input-output relations for how speech
signals are understood; it is also in important ways “messy.” This was one of the first dis-
coveries of the study of speech (see Pardo & Remez, 2006). Because of coarticulation,
formants of /d/ are very different in the context of the vowels /i/ and /u/ (“deep” vs.
“do”). English phonemes that are aspirated in one context are not aspirated in others but
are encoded as the same phoneme. A given sound appears to be one phoneme when it is
embedded in fast speech but a different one when it occurs in slow speech (Miller &
Volaitis, 1989).
Speaker differences also cause a huge problem. Young children make one set of
speech sounds and huge men make another, almost nonoverlapping set. The interpretation
of a sound changes depending on the speaker’s vocal qualities. Dialects are notoriously
difficult for automatic speech recognition. They have implications for the way that sounds
are pronounced (the Boston a, the British r), for the vocabulary itself, and for differing
syntactic and semantic constructions. Native speakers can understand foreign accents (to
a large degree) even if the accents systematically mispronounce sounds relative to their
own use. Furthermore, experience shows that there is often a progression in which foreign
speakers are difficult to understand at first but seem to become easier throughout the con-
versation (for experimental evidence, see Bradlow & Bent, 2008). Listeners can adapt to
the accents, to the speed of speech, and to the vocal characteristics of the speaker over
the course of the conversation. This does not sound like a simple input-output system.
Rather than simple laws like, “such-and-such a sound is interpreted as /t/,” there will be
many different factors that relate interpretation to the speed of speech, to the accent, to
the preceding and subsequent sounds, to lexical knowledge, to vocal qualities of the
speaker, to concurrent visual input (the McGurk effect), and so on. The interpretation
rules may even change over the course of a conversation.
Indeed, even the identification of something as speech can depend on context and
instruction. Speech researchers have transformed speech signals into components made of
sine waves (readers who have not heard this can find videos of it on the Internet). Ini-
tially, such a stimulus might sound like whooping or whistles and is not interpreted as
speech at all. But, as Pardo and Remez (2006) point out, when a listener is told that it is
speech, he or she may be immediately able to correctly transcribe it, without training. It
seems that a high-level belief that the signal is speech can somehow activate the module
and result in a completely new interpretation of the stimulus.
As researchers investigate all these factors, should we question whether the psychology
of speech perception has made true, scientific progress? It sounds like it is a big mess,
with a zillion variables to worry about. Yes, there have been findings about accents and
coarticulation and syllables and speed of speech, but in the end, do all those findings
stick together as a coherent theory?
If that is the objection, it’s not obvious why it is an objection. If it turns out that
speech perception doesn’t cohere, then that is the answer to the scientific question of how
people understand speech. Science cannot promise a beautiful answer. Some domains
may end up being simple, coherent systems with principles that apply without much
exception throughout their domain. Others may not. As noted above, my own suspicion is
that the perception of simple, elegant systems is something perceived by non-experts
from outside the domain. As one gets into the weeds, the simple laws break down as
more and more exceptions must be accounted for.5 And as one gets more and more data,
one becomes aware of novel phenomena and issues that are yet unaccounted for.
Although there may not be a complete theory of higher-level cognition (though see the
next section), there are definitely themes that can be found in multiple areas (e.g., heuris-
tics such as resemblance and salience that influence processing, the graded structure of
categories in multiple domains, the failure to consider broader contexts). There has
clearly been progress in understanding central processes. The psychology of reasoning, in
particular, has developed from a frankly dull study of syllogisms and similar logical prob-
lems to an interesting area that encompasses more issues and has substantive theoretical
proposals (e.g., of causal reasoning or dual system approaches mentioned earlier). Induc-
tion is now studied more than deduction and has proven to be a very rich area (e.g., Fee-
ney & Heit, 2007). Recent work on explanation has opened up this important topic to . . .
uhhh, explanation (Lombrozo, 2006; Williams & Lombrozo, 2010). Scientists tend to vote
with their feet. The great increase in studies of these topics suggests that something is
being learned about them.
To be clear, Fodor does not express a belief about “real science” in Modularity. This
is just a possible interpretation of why he thinks that central systems are a bad candidate
for science. I don’t believe that he thought that nothing at all could be learned about cen-
tral systems, as some things had already been learned by the time of his writing (e.g.,
about semantic memory, similarity, and categorization, some of which are cited in his
own work). My guess is that he thought that things could be learned but that they would
not come together in a coherent or interesting way. If that was his—or anyone’s—predic-
tion about the study of higher-level cognition, it needs to be fleshed out more fully, pro-
viding criteria for what would be real (or interesting) science. The argument would then
need to say why complex topics like object recognition or sentence parsing would qualify
but central systems would not. Clearly, the progress in many areas of higher-level cogni-
tion is prima facie evidence against Fodor’s Law. If they don’t constitute true scientific
domains, a Fodor defender would have to go through one or more of them and explain
how the large amount of work has not resulted in “true” scientific understanding.

3. Architectures of cognition

One potential answer to Fodor’s skepticism may be found in large-scale theories of cognition that attempt to provide a unified architecture within which specific behaviors
will be explained, such as SOAR and ACT-R (Anderson, 1976; Newell, 1990). John
Anderson’s ACT-R theory provides a structure of basic processes and memory compo-
nents that is intended to be constant across the theory’s explanations of different
phenomena, including higher-level cognition (see Anderson, 2007, ch. 1 for a review).
Interestingly, ACT-R has a modular architecture, and Anderson (2007, ch. 2) discusses
whether the modules fit Fodor’s definition. If these components of the theory are truly
modules, this would undermine Fodor’s argument that central systems are not modular.
Although Anderson makes the argument that his modules have many of the properties of
Fodor’s, it seems fairly clear that most of them are not what Fodor had in mind. ACT-
R’s modules of declarative memory, goal representation, “imaginal” processing (a sort of
working memory), and the procedural module are general-purpose processors that do very
different things in different tasks. Fodor’s modules are input modules, which wait for a
specific kind of stimulus and then carry out some processing on it. The very fact that all
of ACT-R’s modules are involved in tasks as different as solving equations, learning
verbs, driving, and attentional tasks shows that they are not specific processors of this
sort. I would say that Anderson’s components are modular in the programming sense that
they each take inputs and produce outputs without other modules influencing their inter-
nal computations. But the jobs they do are very diverse and change from task to task,
unlike Fodor’s examples of speech or spatial perception.
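The distinction is easy to state in code. The following is a hypothetical illustration (all names are invented; this is not ACT-R’s actual implementation) of components that hide their internal state behind an input-output interface while the jobs routed through them change freely from task to task:

```python
# "Modular in the programming sense": each component exposes only an
# input->output interface, and nothing outside it can touch its internal
# computation. All names here are invented for illustration.

class DeclarativeMemory:
    """General-purpose store: the same retrieval interface whatever the task."""
    def __init__(self):
        self._chunks = {}          # private: no other component inspects it

    def store(self, key, value):
        self._chunks[key] = value

    def retrieve(self, key):
        return self._chunks.get(key)

class Procedural:
    """Central production-like rules that only route between modules."""
    def __init__(self, memory):
        self.memory = memory

    def solve(self, equation):
        # "x + 3 = 7": retrieve a memorized arithmetic fact rather than
        # peeking inside another module's computation.
        lhs, rhs = equation.split("=")
        addend = int(lhs.split("+")[1])
        return self.memory.retrieve(("difference", int(rhs), addend))

memory = DeclarativeMemory()
memory.store(("difference", 7, 3), 4)         # a memorized arithmetic fact
print(Procedural(memory).solve("x + 3 = 7"))  # -> 4
```

The same DeclarativeMemory could just as well be serving a verb-learning or driving task, which is exactly why it is not an input module in Fodor’s sense.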
Nonetheless, does the existence of a system like ACT-R, which has been applied to
many different tasks, serve as a test of Fodor’s theory? The topics that have been tackled
by ACT-R are listed on the system’s website (see act-r.psy.cmu.edu/publication/). The list
is extensive, running from lower-level topics of attention and memory to language learn-
ing and parsing, problem solving, reasoning, and decision making, some of which are
undeniably higher-level cognition. Anderson (2007, p. 87) specifically rejects Fodor’s pes-
simism about being able to explain such processes, stating that “Fodor’s worries seem not
to have been realized in a documented instance of human cognition” (p. 59).
ACT-R’s successes are indeed impressive, but not all of them serve as counterargu-
ments to Fodor’s Law. Many of them are based on limited domains where isotropy
clearly does not apply. For example, although the Tower of Hanoi is a classic problem-
solving task, this is not the kind of problem solving that Fodor thought would stymie scientific progress. Similarly, learning arithmetic or how to solve simple algebraic equations does
not require knowledge outside these domains, unlike Fodor’s example of analogical rea-
soning or solving the frame problem. There is nothing wrong with that, and a theory of
learning algebra is an important contribution to psychology, even if it doesn’t help to
answer Fodor. One thing that ACT-R’s models suggest is that there are many important
processes that don’t seem to fall within either Fodor’s input modules or central systems.
Specific cognitive skills that require significant learning are probably not input modules
(e.g., solving equations, learning experimental design), but neither are they wide-open
central processes that draw on everything that we know. Thus, important accomplish-
ments of higher-level cognition may not be subject to Fodor’s Law, given that they oper-
ate within limited domains.
Anderson’s declarative memory module and other theories of memory potentially have
the information to handle isotropy. That is, my memory has information about turkeys,
past episodes in my life, knowledge of state hunting laws, and so on, so that it is possible
for a theory of memory to explain why seeing a dark figure on the road could make me
think about how the beginning of hunting season might explain why a turkey would be
cruising suburban streets on this particular evening. Priming, spreading activation, and
directed search may ultimately account for what knowledge is activated in problem solv-
ing, identification, and other higher-level tasks. However, the list of ACT-R successes
does not yet contain examples of such tasks.
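For concreteness, here is one minimal sketch of how spreading activation might select an interpretation in the turkey example (the network, weights, and decay parameter are all invented for illustration; this is not a published model, only the shape of the mechanism):

```python
# Toy spreading activation: activation flows from cue concepts along
# weighted associative links, decaying with each step; the most active
# stored interpretation wins. All links and numbers are made up.

links = {
    "dark shape on road": {"animal": 0.8, "pothole": 0.3},
    "animal": {"turkey": 0.5, "deer": 0.5},
    "hunting season": {"turkey": 0.7, "deer": 0.6},
}

def spread(cues, links, decay=0.6, steps=2):
    activation = {c: 1.0 for c in cues}       # cues start fully active
    for _ in range(steps):
        boost = {}
        for node, act in activation.items():
            for neighbor, weight in links.get(node, {}).items():
                boost[neighbor] = boost.get(neighbor, 0.0) + act * weight * decay
        for node, extra in boost.items():      # apply after collecting, so
            activation[node] = activation.get(node, 0.0) + extra  # order is irrelevant
    return activation

cues = ["dark shape on road", "hunting season"]
act = spread(cues, links)
candidates = {n: a for n, a in act.items() if n not in cues}
print(max(candidates, key=candidates.get))     # -> turkey
```

The interesting point is that “hunting season” contributes to the verdict only because it happens to be associatively connected, which is precisely the kind of limited, memory-driven isotropy such theories could explain.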
In the end, my impression is that these large-scale architectures have addressed some,
but not all of Fodor’s problem. On the positive side, they have shown that many tasks
can be modeled without concern about uncontrollable isotropy. Cognitive skills like doing
algebra fit neither Fodor’s definition of input modules nor the image of central processes
fraught with isotropy, yet they are significant cognitive accomplishments that can be
modeled by ACT-R (and other systems). But if one agrees that the Fodorian examples of
isotropic reasoning are something that needs to be accounted for, then it isn’t clear that
these architectures have accomplished this. However, as I have argued, this doesn’t
license the conclusion that the problem will never be addressed.

4. The way in which Fodor was right

Having criticized Fodor’s First Law steadily, let me now admit that to some degree he
is right. Higher-level cognition is a mare’s nest. Let me illustrate with a slight digression
into autobiography. Some time ago I wrote a long review of the psychology of concepts,
a central part of higher-level cognition (Murphy, 2002). In the least-cited part of the book
(p. 492), I concluded, “In short, concepts are a mess.” Why? Well, sometimes prototype
theories seem obviously to be correct; sometimes exemplar theories have a lot of evi-
dence. Experiments with slight differences can support completely different theories.
Concepts are closely related to word meanings, but word meanings are sometimes crazy
things that obviously don’t pick out a single or coherent set of concepts. Basic-level cate-
gories are always the easiest to deal with . . . except when they aren’t (e.g., for experts).
Our knowledge of the world infuses many of our concepts, constraining how we learn
and use them, but many successful theories of concept learning completely ignore this.
People from different cultures may follow different rules in their preferred concepts. And
don’t get me started on thematic, relational, or script-based concepts!
When I started doing research on category learning as a graduate student, I thought
that I would be exploring the universal properties of human learning and representation
that would apply across a broad swath of domains. That is, I would run experiments with
artificial materials and thereby eventually understand how people form concepts in gen-
eral. My results would apply to how people learned about animals, social categories,
events, political concepts, and so on. After some initial successes, that dream has been
dashed. However, this has not been too discouraging, because it falls into the category
mentioned above of “the more you learn about something, the more you discover how
complicated it is.” Sometimes the complexity has been subsumed by more powerful theo-
ries that can explain multiple phenomena. For example, the seemingly never-ending battle
between prototype and exemplar theories has been resolved (in my opinion) by models
like SUSTAIN, which make prototype-y representations when it is possible to do so but
learn exceptions (exemplars) when it is not (Love, Medin, & Gureckis, 2004).
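The recruitment principle at the heart of such models can be conveyed in a few lines. The following is a deliberately minimal sketch, not Love et al.'s actual model (SUSTAIN's attention weights, cluster activation rules, and unsupervised mode are all omitted): summarize a category with a centroid until an item surprises the model, and then store that item as a new cluster:

```python
import numpy as np

# A deliberately minimal, SUSTAIN-flavored sketch. The learning rate and
# recruitment rule are illustrative assumptions; the full model has
# attention weights, cluster activations, and more.

class TinyClusterLearner:
    def __init__(self, learning_rate=0.2):
        self.lr = learning_rate
        self.centers = []   # cluster centroids
        self.labels = []    # category label of each cluster

    def _nearest(self, x):
        """Index of the cluster centroid closest to stimulus x."""
        return int(np.argmin([np.linalg.norm(x - c) for c in self.centers]))

    def train(self, x, label):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x.copy())
            self.labels.append(label)
            return
        i = self._nearest(x)
        if self.labels[i] == label:
            # Correct prediction: nudge the cluster toward the item,
            # so the centroid behaves like an evolving prototype.
            self.centers[i] += self.lr * (x - self.centers[i])
        else:
            # Surprise: recruit a new cluster, storing the exception
            # much as an exemplar model would.
            self.centers.append(x.copy())
            self.labels.append(label)

    def predict(self, x):
        return self.labels[self._nearest(np.asarray(x, dtype=float))]
```

A category with regular structure ends up represented by a single prototype-like centroid, while exceptions recruit their own exemplar-like clusters, which is the sense in which such models are prototype-y when possible and exemplar-based when necessary.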
In other cases, one simply has to live with complexity. For example, I think it is unde-
niable that concepts underlie word meanings (Murphy, 2002, ch. 11). But there are prob-
lems, such as the accumulation of different referents over the history of a
word, which can result in a number of distinct entities all called by the same name (Malt,
Sloman, Gennari, Shi, & Wang, 1999). Those things don’t obviously form a category and
may not be perceived as similar. Furthermore, most common words are polysemous, hav-
ing multiple related meanings (senses), such as table meaning a kind of furniture, a panel
of a door or jewelry, a geographic element (mesa), an arrangement of numbers (“see
Table 1”), or the two sides of a backgammon board—not to mention the verb senses.
There is no single concept that underlies all those senses. Indeed, the legislative use of
the verb table (“After discussion, the bill was tabled.”) means opposite things in British
and American English (to consider vs. to put aside), making it difficult for people who
know both senses to represent the verb meaning. There is likely a coherent explanation
of the process by which this complexity arose (e.g., Murphy, 2007; Ramiro, Srinivasan,
Malt, & Xu, 2018), but there is no way to get rid of the unpleasant fact that words can
have many different senses, and within those senses, there may be irregularity of their refer-
ents. Thus, if Fodor meant that central systems are likely to have such messy facts, he
was right. However, one cannot deny that most people understand language very quickly
and often accurately in spite of the mess, so there is some psychology to be done to
explain that.
A further complexity of higher-level cognition is individual variation, another fact that
no one told me about in graduate school. The unwritten assumption about variation was
that this was uninteresting experimental error, probably due to people not paying attention
or not following the instructions (both of which of course do happen). Although people’s
visual systems are usually quite similar, their ways of learning concepts or writing essays
or making decisions can vary a lot. Within my field, the psychology of concepts, this
became apparent in the many studies of the classic debate between prototype and exem-
plar models, when it was eventually realized that group results did not necessarily repre-
sent the individuals making up the group. When models were fit to individual subjects, it
was often found that some people were fit by prototype models, some by exemplar mod-
els, and some by no model (e.g., Malt, 1989; Smith, Murray, & Minda, 1997; see also a
different comparison by Ashby & Vucovich, 2016). Variation is found in other concep-
tual behaviors. When US college students were given the choice of forming thematic or
taxonomic categories, they made consistent choices but did not agree with one another.
That is, depending on the experiment, from 50% to 66% of subjects consistently chose
one, with most of the rest consistently choosing the other (Lin & Murphy, 2001). In deci-
sion making, people differ in their approaches to relatively simple “bandit” problems
(Steyvers et al., 2009). Even in language comprehension (presumably an input module),
it turns out that some people correctly understand the stimuli and others arrive at a differ-
ent, simpler interpretation (Ferreira, 2003).
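The logic of those individual-subject comparisons is simple to state, even though the published analyses are more elaborate. In schematic form (the exponential similarity function, the fixed sensitivity parameter, and the data format below are simplifying assumptions rather than any particular published model's equations), one computes each subject's likelihood under a prototype model and under an exemplar model and asks which fits better:

```python
import numpy as np

# A schematic of per-subject model comparison. The similarity function,
# fixed sensitivity parameter C, and data format are simplifying
# assumptions for illustration only.

C = 2.0  # similarity gradient (normally fit per subject)

def sim(x, y):
    """Exponentially decaying similarity between two stimulus vectors."""
    return np.exp(-C * np.linalg.norm(x - y))

def prob_A_prototype(x, protos):
    """protos: {"A": centroid, "B": centroid}"""
    sA, sB = sim(x, protos["A"]), sim(x, protos["B"])
    return sA / (sA + sB)

def prob_A_exemplar(x, exemplars):
    """exemplars: {"A": [stored items], "B": [stored items]}"""
    sA = sum(sim(x, e) for e in exemplars["A"])
    sB = sum(sim(x, e) for e in exemplars["B"])
    return sA / (sA + sB)

def log_likelihood(prob_A, store, subject_data):
    """subject_data: list of (stimulus, chose_A) pairs for one subject."""
    ll = 0.0
    for x, chose_A in subject_data:
        p = prob_A(np.asarray(x, dtype=float), store)
        ll += np.log(p if chose_A else 1.0 - p)
    return ll

# Compare log_likelihood(prob_A_prototype, ...) with
# log_likelihood(prob_A_exemplar, ...) separately for each subject.
```

Run subject by subject, such comparisons favor the prototype model for some people, the exemplar model for others, and neither for the rest, which is exactly the variation just described.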
For some time, I found this situation depressing and perhaps began to feel sympathy
for Fodor’s Law. After all, our job is to explain the data, but if the data vary from person
to person, then we have a mish-mash to explain, and that is not going to be easy, or fun.
However, this kind of variation is ultimately not that different from the variations
of domain or context that I was already reconciled to. Individual differences are often
understandable as variations in the processes that have already been identified in the ini-
tial analysis. (If not, then they themselves suggest processes that were not previously
known, which is a good thing.) For example, people’s sentence parsing errors might be
due to working memory constraints along with analyses of which constructions require
more computation to complete. If so, although they are unfortunately messy, individual
differences would confirm or inform part of the psychological analysis.
A well-known critique of psychological research is the fact that its subjects are often
WEIRD: Western, educated, industrialized, rich, and from democracies (Henrich, Heine,
& Norenzayan, 2010). This does not describe the majority of people in the world. This
critique has been applied to studies of higher-level cognition (though it doesn’t seem as
relevant for, say, visual psychophysics or associative learning). Perhaps that is another
reason that this is a bad candidate for science—results may be specific to a given culture
or subpopulation within a culture and therefore not universal truths about psychology.
If different subject populations do provide different results from those of the ever-pop-
ular US college sophomore, that is too bad. Our lives would be easier if such variations
did not exist. But presumably there will be an explanation based on education, culture,
etc. Indeed, the very reference to WEIRD people suggests that the variables in the acro-
nym are partly responsible for people’s behavior. So, if dairy farmers in eastern Congo
form categories or make decisions differently from my American college students, this
will be because of a reliable connection between their culture or language or economic
system and the underlying processes. In decision making, some financial decisions differ
based on the wealth or poverty of the decision makers (Mani, Mullainathan, Shafir, &
Zhao, 2013; Shah, Mullainathan, & Shafir, 2012), but these differences generally make
sense given their economic lives. The success of such explanations is another way of con-
firming the general framework used to explain those decisions, even if it requires greater
complexity—more variables in the theory. If our theory can’t encompass the new group,
then it probably does not completely represent the processes used by our sample of con-
venience either.
In short, I think that Fodor’s suspicions about higher-level cognition had some validity.
Understanding thought is not going to be easy, and the answer is going to be messy. Each
topic is going to depend on a large number of variables rather than having a very simple,
elegant explanation. Where he fell short was in the assumption that good science could
not be done in such a domain. Engineers and physicists have to work in complex
domains with many variables, such as trying to understand the aerodynamics of strangely
shaped, constantly varying objects like flying bats. The numbers of variables involved in
meteorology or ecology seem enormous. Yet scientists in such domains can make reliable
predictions. In a field like concept learning, we know of many different types of concepts,
various strategies people use to try to learn them, variables that influence what is learned
and how that information is later used, and changes in some of these tendencies with
development, knowledge, and context. Just as human thought and behavior are complex,
our scientific explanation of them is going to be complex as well. If that makes it a bad
candidate for science, then so be it. However, I am not aware of a good candidate that
would avoid those problems.

5. Conclusion

Jerry Fodor had a habit of making outrageous statements in part (apparently) to pro-
voke strong reactions. For example, his famous claim that all words, including ones like
xylophone or electron, must be innate (Fodor, 1981, p. 279) was presumably intended to
show up weak theories of learning that could not explain how infants and children
acquire so many words with often minimal input. Did he really believe that? He didn’t
(so far as I know) ever deny it, but it seems difficult to credit. Similarly, his First Law
strikes me as a kind of tendentious claim designed to provoke thought and
argument. One might in fact argue that I have taken it far too seriously here, that it was
meant as a kind of joke. However, as Jackendoff (2002) pointed out, Fodor had a long-
running streak of pessimism about our ability to explain cognition, and his Law is com-
pletely in keeping with that. (Also, his discussion goes on a very long time for a joke.) If
his suggestion was not entirely serious but was meant to spur conversation, then it is pre-
sumably appropriate to review and evaluate the claim as I’ve done here. However, if that
was its intent, I’m not sure that the Law achieved its goal of spurring discussion. People
who work in higher-level cognition very seldom refer to it, nor has it had an obvious
effect in reducing work on problem solving, concepts, decision making, similarity, and so
on. Indeed, research on some of these topics, like decision making, has grown far more
than has research on input modules.
One reason for its seeming lack of influence is simply that Fodor gave no strong argu-
ment for his claim. In fact, in working on this article, I was so struck by the weakness
of the arguments that I then went back and reread the earlier sections of Modularity to
see if they were similarly weak (see also Sternberg's Précis commentary). I was relieved
to find that they weren’t: There is close argumentation, reference to empirical phenom-
ena, answers to potential objections, and so on. That is not the case in his section on cen-
tral processes.
Why then did he feel so strongly about them? I have suggested that his conclusion
was due to then prevalent attitudes towards higher-level cognition, which were partly
based on the tradition in experimental psychology of studying extremely simplified problems
(learning lists of nonsense syllables, forming concepts of simple logical connectives, per-
ception of pure colors presented in a Maxwellian view, operant conditioning in Skinner
boxes, and so on). Given this history, questions such as how to explain scientific infer-
ence or conceptual development must have seemed totally unanswerable. Researchers in
1983 investigating reasoning or discourse comprehension had probably heard similar
views for years before Fodor’s book was published. His voice was just one more, and it
didn’t provide a strong argument against the possibility of future progress, so it didn’t
serve to discourage the field. In retrospect that is a good thing, as the subsequent 35 years
have yielded important discoveries that would never have been found if the Law had con-
vinced scientists to put their efforts elsewhere.

Acknowledgments

I would like to thank Frank Keil and the reviewers and editor for helpful comments on
the manuscript.

Notes

1. Citation count services are not very good at counting citations to books. Google
Scholar provides obviously wrong results for the book (38 citations). However, the
précis of the book published in the Behavioral and Brain Sciences is listed as hav-
ing 16,101 citations. That is a lot (and I suspect it actually includes citations to the
book).
2. Fodor mentions an additional property of belief fixation, proposing that it is Qui-
nean. This is different from isotropy in a rather subtle respect, and my discussion
may blur them together. Those interested in the details should consult Modularity.
3. Indeed, this seems so silly that one might read it as simply being a joke. However,
there is no further review of the literature to be the “serious” evidence. A reviewer
suggested that Fodor sometimes used humor to deflect criticism or shortcomings of
his arguments. That could be the case here.
4. Of course, many analogies are just clichés. It is not uncommon to come across
analogies like driving without a roadmap or chickens without heads or herding cats.
These are not actually created analogies—their meanings are stored in long-term
memory and retrieved as needed.
5. A tweet (unfortunately lost in the mists of Twitter) captured this from the perspec-
tive of a scientist’s career: As a graduate student, “I know this stuff!” As an associ-
ate professor, “Do I know anything?” As a full professor, “It’s complicated.”

References

Adler, J. E., & Rips, L. J. (Eds.) (2008). Reasoning: Studies of human inference and its foundations.
Cambridge, England: Cambridge University Press.
Anderson, J. R. (1976). Language, memory, and thought. Hillsdale, NJ: Erlbaum.
Anderson, J. R. (2007). How can the human mind occur in the physical universe? Oxford, England: Oxford
University Press.
Ashby, F. G., & Vucovich, L. E. (2016). The role of feedback contingency in perceptual category learning.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1731–1746.
Baron, J. (2008). Thinking and deciding (4th ed.). Cambridge, England: Cambridge University Press.
Betsch, C., Böhm, R., & Chapman, G. B. (2015). Using behavioral insights to increase vaccination policy
effectiveness. Policy Insights from the Behavioral and Brain Sciences, 2, 61–73.
Bradlow, A. R., & Bent, T. (2008). Perceptual adaptation to non-native speech. Cognition, 106, 707–729.
Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning
and Verbal Behavior, 8, 241–248.
Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. New York: Putnam.
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. (2011). Model-based influences on
humans’ choices and striatal prediction errors. Neuron, 69, 1204–1215.
Daw, N. D., & Tobler, P. N. (2014). Value learning through reinforcement: Basics of dopamine and
reinforcement learning. In P. W. Glimcher & E. Fehr (Eds.), Neuroeconomics: Decision making and the
brain (2nd ed., pp. 283–298). Amsterdam, the Netherlands: Academic Press.
Dunbar, K. (2001). The analogical paradox: Why analogy is so easy in naturalistic settings yet so difficult in
the psychological laboratory. In D. Gentner, K. J. Holyoak, & B. N. Kokinov (Eds.), The analogical
mind: Perspectives from cognitive science (pp. 313–334). Cambridge, MA: MIT Press.
Duncker, K. (1945). On problem-solving. Psychological Monographs, 58 (Whole no. 270), i-113.
Dunsmoor, J. E., Niv, Y., Daw, N., & Phelps, E. A. (2015). Rethinking extinction. Neuron, 88, 47–63.
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and
examples. Artificial Intelligence, 41, 1–63.
Feeney, A., & Heit, E. (Eds.) (2007). Inductive reasoning: Experimental, developmental, and computational
approaches. Cambridge, England: Cambridge University Press.
Feigenson, L., Dehaene, S., & Spelke, E. (2004). Core systems of number. Trends in Cognitive Science, 8,
307–314.
Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47, 164–203.
Ferreira, F., & Nye, J. (2017). The modularity of sentence processing reconsidered. In R. G. de Almeida &
L. R. Gleitman (Eds.), On concepts, modules, and language: Cognitive science at its core (pp. 63–86).
Oxford, England: Oxford University Press.
Fodor, J. A. (1975). The language of thought. New York: Crowell.
Fodor, J. A. (1981). Representations: Philosophical essays on the foundations of cognitive science.
Cambridge, MA: MIT Press.
Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Fodor, J. A. (1985). Précis of The Modularity of Mind. Behavioral and Brain Sciences, 8, 1–42.
Fodor, J. A. (1998). Concepts: Where cognitive science went wrong. Oxford, England: Clarendon Press.
Fodor, J. A., Bever, T. G., & Garrett, M. F. (1974). The psychology of language: An introduction to
psycholinguistics and generative grammar. New York, NY: McGraw-Hill.
Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time discounting and time preference: A critical
review. Journal of Economic Literature, 40, 351–401.
Garrett, M. F. (2017). Exploring the limits of modularity. In R. G. de Almeida & L. R. Gleitman (Eds.), On
concepts, modules, and language: Cognitive science at its core (pp. 41–62). Oxford, England: Oxford
University Press.
Gentner, D., Holyoak, K. J., & Kokinov, B. N. (Eds.) (2001). The analogical mind: Perspectives from
cognitive science. Cambridge, MA: MIT Press.
Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306–355.
Glimcher, P. W., & Rustichini, A. (2004). Neuroeconomics: The consilience of brain and decision. Science,
306, 447–452.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain
Sciences, 33, 61–135.
Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2004). Decisions from experience and the effect of rare
events in risky choice. Psychological Science, 15, 534–539.
Holyoak, K. J., & Koh, K. (1987). Surface and structural similarity in analogical transfer. Memory &
Cognition, 15, 332–340.
Holyoak, K. J., & Morrison, R. G. (Eds.) (2005). The Cambridge handbook of thinking and reasoning.
Cambridge, England: Cambridge University Press.
Holyoak, K. J., & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive Science, 13,
295–355.
Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical
access and mapping. Psychological Review, 104, 427–466.
Jackendoff, R. (2002). Review of The mind doesn’t work that way: The scope and limits of computational
psychology by J. A. Fodor. Language, 78, 164–170.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kaplan, A. S., & Murphy, G. L. (2000). Category learning with minimal prior knowledge. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 26, 829–846.
Knoll, M. A. Z., Appelt, K. C., Johnson, E. J., & Westfall, J. E. (2015). Time to retire: Why Americans
claim benefits early & how to encourage delay. Behavioral Science & Policy, 1, 53–62.
Lewis, G., Poeppel, D., & Murphy, G. L. (2015). The neural bases of taxonomic and thematic conceptual
relations: An MEG study. Neuropsychologia, 68, 176–189.
Lin, E. L., & Murphy, G. L. (2001). Thematic relations in adults’ concepts. Journal of Experimental
Psychology: General, 130, 3–28.
Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Science, 10, 464–470.
Love, B. C., Medin, D. L., & Gureckis, T. M. (2004). SUSTAIN: A network model of category learning.
Psychological Review, 111, 309–332.
Maes, E., Boddez, Y., Alfei, J. M., Krypotos, A.-M., D’Hooge, R., De Houwer, J., & Beckers, T. (2016).
The elusive nature of the blocking effect: 15 failures to replicate. Journal of Experimental Psychology:
General, 145, e49–e71.
Malt, B. C. (1989). An on-line investigation of prototype and exemplar strategies in classification. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 15, 539–555.
Malt, B. C., Sloman, S. A., Gennari, S., Shi, M., & Wang, Y. (1999). Knowing versus naming: Similarity
and the linguistic categorization of artifacts. Journal of Memory and Language, 40, 230–262.
Mani, A., Mullainathan, S., Shafir, E., & Zhao, J. (2013). Poverty impedes cognitive function. Science, 341,
976–980.
Miller, G. F., & Todd, P. M. (1998). Mate choice turns cognitive. Trends in Cognitive Sciences, 2, 190–198.
Miller, J. L., & Volaitis, L. E. (1989). Effect of speaking rate on the perceptual structure of a phonetic
category. Perception & Psychophysics, 46, 505–512.
Murphy, G. L. (2002). The big book of concepts. Cambridge, MA: MIT Press.
Murphy, G. L. (2007). Parsimony and the psychological representation of polysemous words. In M. Rakova,
G. Pethő, & C. Rákosi (Eds.), The cognitive basis of polysemy: New sources of evidence for theories of
word meaning (pp. 47–70). Frankfurt am Main, Germany: Peter Lang Verlag.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Pardo, J. S., & Remez, R. E. (2006). The perception of speech. In M. J. Traxler & M. A. Gernsbacher (Eds.),
Handbook of psycholinguistics (2nd ed., pp. 201–248). London: Academic Press.
Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation
of semantic knowledge in the brain. Nature Reviews: Neuroscience, 8, 976–987.
Ramiro, C., Srinivasan, M., Malt, B. C., & Xu, Y. (2018). Algorithms in the historical emergence of word
senses. Proceedings of the National Academy of Sciences, 115, 2323–2328.
Rogers, T. T., Lambon Ralph, M. A., Garrard, P., Bozeat, S., McClelland, J. L., Hodges, J. R., & Patterson,
K. (2004). Structure and deterioration of semantic memory: A neuropsychological and computational
investigation. Psychological Review, 111, 205–235.
Rosch, E. H. (1973). On the internal structure of perceptual and semantic categories. In T. E. Moore (Ed.),
Cognitive development and the acquisition of language (pp. 111–144). New York: Academic Press.
Ross, B. H. (1984). Remindings and their effects in learning a cognitive skill. Cognitive Psychology, 16,
371–416.
Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce, & W. F.
Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33–58). Hillsdale, NJ: Erlbaum.
Rumelhart, D. E. (1981). Understanding understanding (CHIP Technical Report 100). San Diego, CA:
Center for Human Information Processing.
Rumelhart, D. E., & Norman, D. A. (1981). Analogical processes in learning. In J. R. Anderson (Ed.),
Cognitive skills and their acquisition (pp. 335–359). Hillsdale, NJ: Erlbaum.
Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals and understanding. Hillsdale, NJ: Erlbaum.
Shah, A. K., Mullainathan, S., & Shafir, E. (2012). Some consequences of having too little. Science, 338,
682–685.
Smith, J. D., Murray, M. J., & Minda, J. P. (1997). Straight talk about linear separability. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 23, 659–680.
Smith, E. E., Rips, L. J., & Shoben, E. J. (1974). Semantic memory and psychological semantics. In G. H.
Bower (Ed.), The psychology of learning and motivation, Vol. 8 (pp. 1–45). New York: Academic Press.
Soto, F. A. (2018). Contemporary associative learning theory predicts failures to obtain blocking: Comment
on Maes et al. (2016). Journal of Experimental Psychology: General, 147, 597–602.
Spalding, T. L., & Murphy, G. L. (1996). Effects of background knowledge on category construction.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 525–538.
Steyvers, M., Lee, M. D., & Wagenmakers, E.-J. (2009). A Bayesian analysis of human decision-making on
bandit problems. Journal of Mathematical Psychology, 53, 168–179.
Swabey, P. (2015). A quantum of security. Downloaded from economist.com on 09/05/2018.
Thagard, P., Holyoak, K. J., Nelson, G., & Gochfeld, D. (1990). Analog retrieval by constraint satisfaction.
Artificial Intelligence, 46, 259–310.
Urcelay, G. P. (2017). Competition and facilitation in compound conditioning. Journal of Experimental
Psychology: Animal Behavior Processes, 43, 303–314.
Williams, J. J., & Lombrozo, T. (2010). The role of explanation in discovery and generalization: Evidence
from category learning. Cognitive Science, 34, 776–806.
Woodworth, R. S., & Shlosberg, H. (1954). Experimental psychology (rev ed.). New York: Holt, Rinehart, &
Winston.
