Ashley Chico
14 March 2021
Paper 1
There has been a long-standing debate among cognitive scientists as to which tools and methods can most accurately evaluate mental representations. Since the field’s conception, alternative theories and assumptions have been proposed to help alleviate some of the ongoing challenges faced on the philosophical side of science. One particular approach within the early phase of Cognitive Science, CRUM, was developed to assist cognitive scientists in understanding the framework of the human brain. However, I argue that CRUM does not adequately explain the linguistic functions of the human mind. First, I will show this by examining CRUM’s representational aspect through logic, rules, and conceptual theories. Secondly, I will analyze how the representational component of CRUM fails in evaluating the linguistic element of the mind through Von Eckardt’s Substantive Assumptions, followed by Peirce’s triadic theory. Finally, I will disprove the accuracy of CRUM as an equal demonstration of the human mind through Artificial Intelligence.

CRUM, the Computational-Representational Understanding of Mind, was extended by Thagard as a method that holds two fundamental commitments: that the human mind is representational and computational (Thagard, pg. 10). Within it, Thagard claims three
hypotheses: Hypothesis 1 assumes that knowledge in the mind consists of mental representations
(Thagard, pg. 5). Hypothesis 2 suggests that humans operate on representations through mental
procedures known as computations (Thagard, pg. 5). The third hypothesis, known as the Central
Hypothesis, combines the two ideas by stating that human mental operations conduct ‘thinking’
through computational procedures operating upon mental representations (Thagard, pg. 10). If
CRUM is an example of how the human mind ‘thinks’ through representation and computation,
then both elements should not fail. However, CRUM falls short of demonstrating the human mind in both respects. Representation through logic lets CRUM encode a vast amount of ideas, symbols, concepts, etc. using similar deductive structures. Where representation through logic falls short is that it is not possible to input representations that do not exist within the system. For example, if I required CRUM to compute a representation for a term it has never encoded, no deduction could produce one, because the symbol is simply absent from the system. Similarly, rule-based structures allow for the necessary exceptions to logical deductions, but fail to representationally account for linguistic suggestions such as the use of sarcasm. Furthermore, CRUM is representationally weak under the majority of conceptual theories.
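To make the failure concrete, here is a minimal sketch of a logic- and rule-style system (the facts, the single rule, and the nonsense token 'glorp' are my own hypothetical illustrations, not anything drawn from Thagard): deduction works over symbols the system already stores, but an input that was never encoded has no representation at all, and a sarcastic utterance would match only its literal symbols.

```python
# Minimal sketch of a logic/rule-based representation system (illustrative only;
# the facts, rule, and token 'glorp' are hypothetical, not part of CRUM itself).

facts = {"dog", "dog_is_domesticated"}
rules = [
    # (premises, conclusion): if every premise is a known fact, add the conclusion.
    ({"dog", "dog_is_domesticated"}, "dog_is_pet"),
]

def deduce(facts, rules):
    """Forward-chain over the rules until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def represent(token, known):
    """The system can only 'represent' tokens that already exist within it."""
    return token if token in known else None

print("dog_is_pet" in deduce(facts, rules))  # True: deduction over stored symbols
print(represent("dog", facts))               # 'dog'
print(represent("glorp", facts))             # None: no representation exists for
                                             # an input the system never encoded
# A sarcastic "great, another vet visit" would match only its literal symbols;
# no rule exists that could fire on the inverted, intended meaning.
```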
Under Thagard’s Schema of Concepts, he indicates four theories of how CRUM could equal the mental representations of the human mind. Concept 1 holds that an idea is inclusive of all that is associated with it; for example, ‘H2O’ encompasses the scientific term, the chemical components, and all other relevant information about H2O. Yet if the term to be computed were a new concept, such as ‘yertum’, CRUM could not provide a representation, since the term holds no meaning it can interpret. This differs from Concept 3, which accounts for content yet to be determined, but there representation falls short as well, because under Concept 1 I have given ‘yertum’ a definition that is simply unknown to CRUM yet is in existence. Since CRUM is unable to interpret the notion that I have invented this concept, it cannot represent it in a sentence, leaving the term non-computable.
Concept 2 argues that concepts are representations of categories in the mind. If applied to the term ‘pet’ as ‘an animal that is domesticated and sniffs the ground’, both a dog and a cat would qualify and would most likely be represented under CRUM. If, however, I were to reverse the computational demand of CRUM and ask it to represent a result for “a domesticated animal with fur, whiskers, and paws that frequently pounces”, CRUM could not produce one answer of a cat over a dog, because anatomically both could pounce. Even if additional exceptions were provided by rules such as ‘sniffs the ground’, ‘hairless’, or ‘playful’, it would only revert to larger concepts such as ‘pet’ and fail to deliver a linguistic equal to a human, for whom ‘cat’ would be the most likely answer by word association with ‘pounce’. As with Concept 4, I believe this function could be possible if CRUM possessed the linguistic capability to translate across languages, such as with “car” and “carro”, which would both fall under the original language represented for ‘automobile’. This would explain how the brain of a bilingual person can similarly interchange multiple representations for ‘car’.
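A small sketch (with hypothetical feature lists of my own) shows why the reversed query ties: if categories are feature sets and matching is a coverage score, ‘cat’ and ‘dog’ satisfy the description equally, so a CRUM-style matcher has no basis to prefer one the way human word association with ‘pounce’ does.

```python
# Illustrative Concept 2 matcher: categories as feature sets (hypothetical data).
categories = {
    "cat": {"domesticated", "fur", "whiskers", "paws", "pounces"},
    "dog": {"domesticated", "fur", "whiskers", "paws", "pounces", "sniffs_ground"},
}

# The reversed query: 'a domesticated animal with fur, whiskers, and paws
# that frequently pounces'.
query = {"domesticated", "fur", "whiskers", "paws", "pounces"}

# Score each category by how much of the query its features cover.
scores = {name: len(query & feats) / len(query) for name, feats in categories.items()}
print(scores)  # {'cat': 1.0, 'dog': 1.0}: a perfect tie

best = max(scores.values())
print([name for name, s in scores.items() if s == best])
# ['cat', 'dog']: the matcher cannot privilege 'cat' the way a human's
# word association with 'pounce' immediately does.
```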
The Classical Theories do not provide answers to the unresolved cases left by words that cannot be represented via CRUM. One requirement of the first Classical View is that every category have necessary and sufficient conditions, which, as stated in Lecture 3 (pg. 32), is not always possible. An example would be trying to define the categories of “explorer”, “abstract”, or “astounding”. With regard to the remaining two Views, the linguistic issues are the same: not all concepts have definitions. Furthermore, concepts do not necessarily require definitions for the human mind to communicate linguistically; for example, the basic action ‘drink water’ is still understood if I wrote ‘drink the water inspiringly’. CRUM could function under the Probabilistic/Prototype View because, instead of a strict definition of a category, it is based upon the degree of characteristics surrounding the term. However, since the Probabilistic/Prototype View was developed in direct response to the Classical Views and remains unable to explain specific interpretations of concepts, e.g. scaling levels of pain, it remains an unreliable mechanism for CRUM to rely on when it comes to the linguistic element of the mind.
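The contrast with the Classical View can be sketched as graded similarity to a weighted prototype rather than a strict definition (the features and weights below are hypothetical illustrations): membership becomes a matter of degree, which also makes plain why a scalar notion like a level of pain resists this treatment.

```python
# Prototype-style categorization: graded similarity to a weighted prototype
# (features and weights are hypothetical illustrations).
prototype_pet = {"domesticated": 0.9, "furry": 0.6, "playful": 0.5, "small": 0.4}

def typicality(features, prototype):
    """Degree of membership: matched feature weights, normalized to [0, 1]."""
    total = sum(prototype.values())
    return sum(w for f, w in prototype.items() if f in features) / total

print(typicality({"domesticated", "furry", "playful"}, prototype_pet))  # ~0.83
print(typicality({"domesticated"}, prototype_pet))                      # ~0.38
# Membership is a degree, not a yes/no definition. But there is no analogous
# prototype for 'a seven-out-of-ten pain': that concept is a position on a
# scale, not a cluster of characteristic features.
```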
Von Eckardt’s Substantive Assumptions offer a further test of CRUM as a viable and reliable source of how the human mind holds mental representations.
Two sub-assumptions studied by Von Eckardt are critical for CRUM to function. The first is Substantive Assumption R2.2, which states that representations have semantic properties, that is, meaning behind each representation (Von Eckardt, pg. 51). The second, R2.4, focuses on the inclusion of an interpreter, holding that mental representations must be significant for the one in whom the mind resides (Von Eckardt, pg. 51). When CRUM’s handling of linguistics is analyzed, both of these assumptions are necessary for CRUM to be viable. This is because anything that is a mental representation must have meaning in order to work in instances where other theories fall short, e.g. drinking water inspiringly, and must make representational sense to the interpreter in whom it resides.
Furthermore, if CRUM were to operate under Peirce’s triadic relation, in which something serves as a sign of an object only for an interpretant, representation could in principle work (Von Eckardt, pg. 145). This is because it is the relationship among interpretant, object, and sign that gives meaning to its elements. However, CRUM is unable to explain the human mind’s mental representations of linguistics under this theoretical framework, because the framework is theoretically assumed, in line with multiple competing theories of CRUM, and requires the two aforementioned assumptions to hold for
CRUM to function representationally. A final theory, Artificial Intelligence, could determine whether or not CRUM can explain how the human mind represents linguistics.
Searle and the Churchlands held competing views on the viability of CRUM operating in relation to the Turing Machine. The machine possesses the capability to represent ‘things’ (representations) and compute over them (computations) to carry out an activity (concepts). The argument is that if the machine operates using CRUM and is successful, then CRUM is successful in explaining the human mind. To test this theory, the imitation game was invented, in which the Turing Test is passed if the human interacting with the device believes they are interacting with another human. On multiple occasions the test has been passed, but it fails against another key scenario known as the Chinese Room argument. Searle gives the example of a room in which a person who knows no Chinese follows rules for manipulating Chinese characters and produces convincing replies; if the exchange nonetheless involves no understanding, the system has no semantic capabilities (Searle, pg. 27). Since the room fails at hosting semantic properties when tested, Artificial Intelligence, then, is not a viable demonstration that CRUM explains the human mind.
Recall that one of the assumptions needed for CRUM was R2.2, in which mental representations have semantic properties with meaning behind them. Since the assumption is theoretical, and since there was no meaning behind the room’s exchange of symbols, the system was simply computing without purpose and failed to generate any meaning similar to that of a human brain.
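Searle’s point can be rendered as a minimal sketch of pure symbol lookup (the two-entry table is a hypothetical stand-in for his rulebook): the program returns well-formed replies while storing nothing that could count as the meaning R2.2 requires.

```python
# Chinese-Room-style symbol manipulation (illustrative): syntax without semantics.
# The two entries below are hypothetical stand-ins for Searle's rulebook.
rulebook = {
    "你好吗": "我很好",
    "你是谁": "我是人",
}

def room(symbols):
    """Return the scripted reply; no meaning is consulted anywhere."""
    return rulebook.get(symbols, symbols)

print(room("你好吗"))  # A fluent-looking reply emerges from pure lookup.
# Nothing in `rulebook` or `room` carries the semantic properties R2.2 demands:
# replacing every character with an arbitrary token would change nothing about
# how the program runs, which is exactly Searle's charge against the room.
```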
The Churchlands disagreed with Searle, arguing that simply because semantics cannot be measured from syntax alone, we should not discount the possibility that an Artificial Intelligence could produce semantic properties (Churchlands, pg. 33). Moreover, they suggested that an equal measure of the biology-based human mind would be a technology equally based on biology, thus equalizing the measurement through the same architecture (Churchlands, pg. 36). A primary issue with this stance is that the technology needed to conduct this measurement has yet to be invented, so the claim cannot be supported. Thus, both Artificial Intelligence and CRUM fail to explain the human mind linguistically through mental representations.
In conclusion, CRUM does not explain the human mind. The first defense of this point came through the examination of logic, rules, and conceptual theories. Within each, CRUM’s mental representations were shown to be an inadequate means of explaining the human mind when analyzed under Thagard’s Schema of Concepts, the Classical Views, and the Probabilistic/Prototype theories. The second defense was introduced with Von Eckardt’s Substantive Assumptions, which showed that CRUM is unable to function without representational assumptions. Peirce’s triadic theory was introduced as a potential way for CRUM to work, but it failed because of its lack of cohesive support from cognitive scientists and its reliance on assumptions. A final defense, that Artificial Intelligence could be a method by which CRUM works, was disproven under Searle’s Chinese Room argument, with the lack of semantic capability being the primary reason for this result. A counter-argument from the Churchlands, that representations could be tested under similarly biology-based technology, failed because such technology has yet to be built. It is for all these reasons that CRUM cannot explain how the human mind represents linguistics.