
ATTRIBUTING INTENTIONALITY TO ARTIFICIAL INTELLIGENCE: AN

INVESTIGATION

By

Matthew David Johnson

BS, University of Oklahoma, 2015

MASTER’S THESIS

SUBMITTED IN FULFILLMENT OF THE REQUIREMENTS

FOR THE

DEGREE OF MASTER OF PHILOSOPHY

IN THE DEPARTMENT OF

GRADUATE STUDIES OF ARTS AND SCIENCES AT

FORDHAM UNIVERSITY

NEW YORK

MAY, 2023
TABLE OF CONTENTS

I: Introduction - 1

II: Phenomenological Intentionality - 5

III: Analytic Intentionality - 12

IV: The Intentional Stance - Systematic Intentionality - 17

V: AI - Convolutional Neural Networks and Their Faults - 23

VI: Can AI Have Intentionality - 30

VII: AI Intentionality - The Need for the Intentional Stance - 34

VIII: Modified Intentional Stance - 39

IX: Test and Speculation - 52

X: Conclusion - 56

XI: Bibliography - 57

XII: Abstract

XIII: VITA
I: INTRODUCTION

In 1956 John McCarthy coined the term Artificial Intelligence (AI). His and his

colleagues’ hopes, largely optimistic, were to replicate human intelligence utilizing computer

technology within a decade.1 Almost 70 years later, the advancements in McCarthy’s mission

have been celebrated, demonized, and sensationalized in every form of media available - from

zealous claims in science fiction horror to serious real-world concerns regarding privacy,

information, and bias. However, the nature of AI and the kinds of questions that are posed by

Sci-Fi and newer technologies have not themselves been fully answered. Will the horrific

landscape of Terminator or The Matrix present itself as a real possibility? Can a machine

experience belief or emotion? What is modern AI thinking about? Each of these questions

prompts us to look at our own minds and thought processes for some kind of resolution, since

the depictions of our own thought processes and object recognition are fundamentally being

recreated in AI systems. However, to what extent do modern AI systems actually represent a

core cognitive function that appears fundamental in our everyday lives? Does AI have

intentionality, the very mechanic that we use to describe our own thought? Can we attribute

some variation of intentionality to these technological machines? In this paper, I intend to

discuss this very question. I will present three perspectives of intentionality and categorize them

by various functions that will allow me to translate them to AI structures, specifically

convolutional neural networks, and their core abilities with their faults. The leading theory that I

argue for as the rational theory to attribute is the intentional stance; however, it requires some

added

1Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (Great Britain: Pelican
Books, 2019), 4-5


qualifications. Ultimately, attribution of intentionality to the modern marvel of AI is something

that will occur quite passively and certainly not quickly.

The discussions revolving around AI tend to focus on the attribution of consciousness;

however, in any of these arguments’ end-state, the positions revolve around the same concern.

Namely, the issue of the philosophical zombie. The position of this thought experiment focuses

on an unresolvable condition. A philosophical zombie is a hypothetical being characterized as a

normal, living human being who participates in every activity that a normal human would do.

They have favorite movies and music, commute to work, and have intimate relationships with

family and friends. The caveat is that they are completely unconscious. The philosopher’s

dilemma is to describe how we can identify consciousness without correlating it to average

everyday behaviors. The unresolvable condition is that positing the philosophical

zombie as a real being commits you to skepticism about every single person you have

ever met. When and how can you verify those individuals’ consciousness without attributing it to

some behavior set? This concern pulls away from current concerns regarding new and improved

AI systems. Trying to explain or attribute consciousness to AI, although important and

philosophically relevant, is getting ahead of itself. These systems don’t behave exactly like

humans, and before we can reasonably take a position on consciousness, I suggest we ask if AI

can see, believe, or express. The focus shifts from whether or not AI is “awake” to what qualifies

as an intentional object and how one operates with it. My intent in discussing intentionality is to

ascribe a mental process that may not necessarily need consciousness in order to function. In

order to do that I will begin with what intentionality is and how it has been incorporated into

human beings.

Intentionality, in its contemporary use, does not refer to intentional action - at least not

directly. To have intent or intend something, in common language, is to prescribe a certain

behavior toward an expected action. In other words, you have a purpose for acting in a certain

way. Intentionality, as a mental phenomenon, refers to a much broader category. This category is

defined by the content of a thought, its directedness, or aboutness. Behavior expectations can be

a part of intentionality, but so is thinking of the color red or the consuming thoughts of grief.

Intentionality is a descriptor of how thought functions in the mind, not just what we do because

of our thoughts. Franz Brentano is credited with bringing intentionality into the contemporary

study of philosophy. Since his explanation in Psychology from an Empirical Standpoint in 1874,

the functions of this mental phenomenon have fragmented significantly into different mechanics

or methods that hope to provide insight into this perplexing mental description. However, the

broad definition of what intentionality describes has stayed the same. Prior to presenting a few

positions of intentionality, I want to make clear the process by which I intend to employ these

theories. The Stanford Encyclopedia of Philosophy’s entry on consciousness and intentionality

provides an excellent narrative by which I can use various theories of intentionality. Each theory

presented can, in some capacity, be described through three major components: detachability,

reflexivity, and basic forms, all of which describe intentionality along a particular spectrum. (1)

Detachability asks whether or not the object of thought is required to be present in real life for

you to intend or direct your thought toward that object. (2) Reflexivity refers to the relationship

between the conscious mind and intentionality. Does the mind have to be conscious in order to

have some intentional position? Is there a pre-requisite of self-consciousness that determines an

ability to have intentionality? (3) Lastly, is there only one kind of form intentionality can take? Is

there a requirement for a subject and an object to be present for any individual to intend

something? Categorizing the following three types of intentionality by these components will

allow me to readily assess their compatibility with modern AI and help elucidate what facets are

missing from these modern machines.2 Brentano’s vision is a natural starting point for this

investigation. He, along with Edmund Husserl, facilitates the phenomenological interpretation of

intentionality.

2Charles Siewert, “Consciousness and Intentionality”, Stanford Encyclopedia of Philosophy,


Stanford, Summer 2022 Edition, https://plato.stanford.edu/entries/consciousness-
intentionality/#toc
II: PHENOMENOLOGICAL INTENTIONALITY

Franz Brentano’s description of intentionality is directed towards the inner world of our

minds as he works to provide a universalized definition of mental content in Chapter 1 of his

aforementioned book.

Every mental phenomenon is characterized by what the Scholastics of the Middle Ages
called the intentional (or mental) inexistence of an object, and what we might call, though
not wholly unambiguously, reference to a content, direction toward an object (which is
not to be understood here as meaning a thing), or immanent objectivity. Every mental
phenomenon includes something as object within itself, although they do not all do so in
the same way. In presentation something is presented, in judgement something is
affirmed or denied, in love loved, in hate hated, in desire desired and so on.3

This definition of intentionality is akin to what has been provided previously; however, it also

begins to highlight one of the categories presented above, namely the forms of intentionality. The

form of intentionality in Brentano’s description is rendered by subject-object distinction.

Regardless of the content, there is always an object toward which a subject is directing its

mind. He further elucidates his position through the work of Sir William Hamilton, a Scottish

philosopher of the early 19th century. Hamilton’s position agrees with Brentano’s. However,

Hamilton finds an objection: specifically, that there cannot be an object for emotional content.

Brentano diminishes this concern by highlighting the language we use. For example, the eerie

scratch of a chalkboard may create a feeling of discomfort, and my enunciation of “I don’t like

that” shows that my emotion is linked to some object, specifically the screech of the chalk. This

kind of example expresses Brentano’s idea but highlights Hamilton’s concern. How can my

3Franz Brentano, Psychology from an Empirical Standpoint. (New York: Routledge Classics,
2015), 109


emotional experience of an object be attributed to this produced sound? Brentano agrees with

Hamilton, and he recognizes that some objects and their experiences may appear to “fuse” such

as pleasure or pain garnered from a sound. The experience is associated with hearing, not the

sound itself. “The object to which a feeling refers is not always an external object.”4 However,

the form of intentionality is left unchanged because the object in question lives in the realm of

‘mental inexistence’. Therefore, the emotional component of intentionality is left intact. To

further his intentional descriptions, Brentano transitions towards the importance of

consciousness.

Brentano’s analysis of consciousness includes a very high degree of reflexivity. The

description that is promulgated by Hamilton, in Brentano’s account, is that there are two kinds of

perception: internal and external. Only the inner consciousness can interpret internal perception.

To this position, Brentano states that “inner perception is not merely the only kind of perception

which is immediately evident; it is really the only perception in the strict sense of the word” and

“mental phenomena, therefore, may be described as the only phenomena of which perception in

the strict sense of the word is possible.”5 Brentano’s view is that there is only one kind of

perception, and thus only one kind of consciousness by which mental phenomena can be

supplied. The pre-requisite of consciousness is a necessary component in Brentano’s intentional

system. The lasting question is whether or not the object of the mental phenomenon has to be

present in reality in order to have an intentional position toward it. Brentano’s discussion on

detachability purports that no requirement exists.

4Franz Brentano, Psychology, 94


5Franz Brentano, Psychology, 95

The description of detachability in Brentano’s work is largely a refutation of another

viewpoint, specifically the view of Alexander Bain, a contemporary British empiricist who

presents the case as a contradiction between the physical phenomena and the intentional

experience. The argument is that the two components must exist together but cannot because the

moment the intentional experience is not present, the physical phenomenon now exists as an

unperceived object in one’s mind. Brentano quotes Bain in highlighting this.

There is a manifest contradiction in the supposition; we are required at the same moment
to perceive the thing and not to perceive it. We know the touch of iron, but we cannot
know the touch apart from the touch.6

Brentano remarks on this concept using an analogy to color. He states:

It is undoubtedly true that a color appears to us only when we have a presentation of it.
We cannot conclude from this, however, that a color cannot exist without being
presented.7

Ultimately, Brentano doesn’t provide an absolute description of detachability. Rather, he simply

doesn’t think that the logic presented to make a claim like Bain’s is accurate. The difference

between a physical phenomenon and an intentional one is just a matter of comparison. “When we

compare one with the other we discover conflicts which clearly show that no real existence

corresponds to the intentional existence in this case.”8 His description allows us to infer that

there are some mental phenomena that are completely detached and not dependent on an existing

real entity.

6Franz Brentano, Psychology, 97


7Ibid
8Franz Brentano, Psychology, 98

In summary, Brentano’s work is just the foundation of the phenomenological position.

From this start, we see a singular object-subject form bonded by the “mental inexistence” of an

object, wholly reflexive and holding tentative detachability. Edmund Husserl, credited with the

founding of phenomenology, furthered Brentano’s position and reinforced some of this

conceptualization of intentionality.

Husserl’s work, as it pertains to intentionality, can be best described through his method,

the phenomenological reduction or the epoché: a process by which any individual, through a

kind of meditation, can make known the substrate object of their intentional thought. In his book

titled Ideas for a Pure Phenomenology and Phenomenological Philosophy Book 1, Husserl sets

out to establish the natural attitude. This position is a foundational human perspective that

intuitively describes one’s everyday experience. The natural attitude is a description of three

major areas: the environment of objects in which one finds oneself, the presence of the

egotistical “I” in relation to the environmental objects, and the presence of other egotistical

beings. These areas, simply described, are as follows: Our environment is primarily

spatiotemporal and extends into an infinite resolution. The presence of the “I” or the “cogito” as

Husserl states, is completely encompassed in the natural attitude, regardless of my own

awareness of it. Lastly, individuals attribute these same two foundational components to other

human beings.9

Once the natural attitude has been established, Husserl describes the process of

“bracketing”. The aim is to restrict or remove the ancillary components of an object. The

9Edmund Husserl, Ideas for a Pure Phenomenology and Phenomenological Philosophy: First
Book: General Introduction to Pure Phenomenology (Indianapolis: Hackett Publishing
Company, 2014), 48-55

environment includes the objects, but also a plethora of qualities and contingent expectations tied

to or attributed to an object as it is experienced in the natural attitude - ultimately obscuring it

from a purely phenomenological state. The method of “bracketing” allows us to enter a state of

pure consciousness by which the intentional objects can be analyzed. Husserl, quoted below,

provides an example of what could be “bracketed” within the context of the natural attitude.

I suspend all sciences related to this natural world, regardless of how firm a standing
they have for me, how much they amaze me, or how little I think of raising even the
slightest objection to them. I make absolutely no use of their valid results. [57] I refrain
from adopting a single proposition that belongs to them, even if the evidence for it is
perfect; no such proposition is taken up by me, none provides me a foundation - so long,
it bears noting, as it is understood as it presents itself, in these sciences, as a truth about
actualities of this world. I am permitted to assume it only after I have bracketed it.10

By dispensing with the natural attitude in this particular way, the realm of pure consciousness

can then operate unencumbered. Then, intentionality operates as the mechanic by which we

navigate the realm of pure consciousness. In this explanation, Husserl gives insight into the role

of reflexivity in his intentional system. Husserl describes intentionality as something

consciousness does. An action it performs. He states that “inherent in the cogito itself and

immanent to it is a “focus on” the object, a focus that, on the other hand, springs forth from the

“ego” that thus can never be missing.” This distinction highlights that Husserl’s depiction of

intentionality correlates with Brentano’s, in that it shares a high degree of reflexivity and is

dependent on consciousness.11 As Husserl continues to explain intentionality in the context of his

reduction, we see correlates to both detachability and the form of intentionality.

10Edmund Husserl, Ideas, 56


11Edmund Husserl, Ideas, 59-60

Although Brentano alludes to a detachable system, Husserl fully embraces it, taking a

more distinct and direct position. He describes this in relation to the psychological study of

“experience” and “subject”.

an experience is consciousness for something, for example, that a fiction is a fiction of a


specific centaur but also that a perception is perception of its “actual” object, a judgment
is judgment of its state of affairs and so forth - that has no bearing on the experience as a
factum in the world, specifically in the factual, psychological connection.12

When evaluating the intentional object in pure consciousness, its relationship to reality is

irrelevant because the focus of the epoché is on the experience or the givenness of the object in

question. Husserl refers to this experience of the intentional object as the object’s noema or the

noetic experience.13 What this means is that you can have noetic content without having the

physical reality of the object in question.14 This description can be presented through an

imaginary circumstance. I can readily imagine some contrived alien with hundreds of eyes. This

imaginary thought could give me a glimpse of a myriad of intentional objects as they relate to the

alien. My fear or concern, the alien itself, what actions I would perform if I were to meet such a

being, and so on. According to Husserl, I should be able to perform the epoché with the

imagined creature as the focus point and identify the noetic content of each of the intentional

objects. However, this unreal object (our alien) does not exist in reality and is thus completely

detached from the literal experience of the aforementioned alien. This discussion point draws

upon the last category, namely, Husserl’s discussion on the form of intentionality. The alien in

question also does not exist in “mental inexistence” as Brentano would have argued.

12Edmund Husserl, Ideas, 63


13Edmund Husserl, Ideas, 167
14Charles Siewert, “Consciousness and Intentionality”

Broadly speaking, Husserl and Brentano share the same form of intentionality as an

object-subject relationship. The difference is that Husserl doesn’t place objects, like the imagined

alien, into a separate category as Brentano did; he attributes their position to transcendence. The

noetic content of some intentional object operates as a culmination of events, the spatiotemporal

experience, and our memories, all of which cause the object to appear, not as a singular real-

world object, but as an experiential and culminated object, one abstracted from our experiences

and perceptions. This intentional object is what we are subject to in Husserl’s depiction. Husserl

states that “the measure for all rational assertions about transcendence, is itself to be gathered

from nowhere else than from the essential content of perception or, better, from the specific

kinds of connections that we call “identifying [or “ostensive”] experience.”15 Ultimately, this

means that my transcendental alien was likely drawn from some science fiction movie I

previously viewed vice a literal experience of one. The removal of ‘mental inexistence’ does

change the form of intentionality, but the broad definition of subject-object relationship is still

readily present and will be the primary description I will use to compare with AI.

The phenomenological approach to intentionality, as informed by Brentano and Husserl,

has yielded a distinct position that is characterized by a requisite quality of consciousness, an

ability to detach the object of discussion from reality, and a form that demands the presence of a

subject and an object. This approach, although human-like in most of its details, is not the only

depiction of intentionality. The analytic approach begins from an entirely different foundation

and contrasts with the internal, mind-centric, depiction of phenomenology. Instead, the analytic

approach shifts its focus to an externally informed understanding.

15Edmund Husserl, Ideas, 86


III: ANALYTIC INTENTIONALITY

In the analytic tradition, the narrative for understanding intentionality is not as

straightforward as the phenomenological. To attribute it to one or two individuals who made

claims on intentionality in the analytic tradition is to disregard a plethora of contributors. In order

to summarize analytic intentionality within the context of the three major descriptors, I will enlist

the aid of two schools of thought that permeate analytic descriptions in the philosophy of

mind: behaviorism and functionalism.

In A Brief History of Analytic Philosophy: From Russell to Rawls, Stephen Schwartz

provides a useful foundation for the treatment of the analytical tradition.

The name “analytic philosophy” refers more to the methods of analytic philosophy than
to any particular doctrine that analytic philosophers have all shared. An analytic
philosopher analyzes problems, concepts, issues, and arguments. She breaks them down
into their parts, dissects them, to find their important features. Insight comes from seeing
how things are put together and how they can be prized apart; how they are constructed
and how they can be reconstructed. Symbolic logic was and remains the most distinctive
tool of analytic philosophers.16

Logic, operating as a foundational tool and method, focused its dissection on

various philosophical positions. It formulated the utterances of individuals into mathematical

variables, otherwise referred to as “propositions”, in order to better interpret philosophical claims.

This use gave rise to the logical positivist movement which formalized the verifiability criterion

of meaningfulness. This criterion states “that a sentence is factually significant to any given

16Stephen Schwartz, A Brief History of Analytical Philosophy From Russell to Rawls (West
Sussex, UK: John Wiley & Sons Inc., 2012), 3


person if, and only if, he knows how to verify the proposition which it purports to express.”17

Unfortunately, given this principle, “most utterances of metaphysicians, ethicists, aestheticians,

and theologians do not meet this criterion, therefore they are meaningless. Or so the logical

positivists claimed.”18 This principle of verifiability is the driving piece behind behaviorism. The

only thing that can be verified is the behavior that people display, not the supposed mental

content that Husserl and Brentano took to be foundational. Because of this treatment of mental

content, reflexivity in intentionality is not readily apparent in the behaviorist depiction of reality.

Conscious states of mind are not necessary to analyze behavior. The other two descriptions of

intentionality can be described succinctly under the intentional behaviorist conception supplied

by Gordon R. Foxall.

In his article titled Ascribing Intentionality, Foxall provides a description of intentional

behaviorism. He states that “intentional behaviorism is a linguistically based approach to

explanation.”19 This perspective presents an issue for uncovering the two other descriptors of

intentionality.

Intentionality itself is viewed within it as an entirely linguistic matter and it makes no


ontological assumptions about its nature, extent, or incidence. Science is a matter of using
language in a particular way; radical behaviorism strives to do so without recourse to
intentional idioms, but it is ultimately unable to achieve this.20

What this means is that there is no form or detachability that can be described under the

behaviorist’s conception. Intentionality is a topic to be eliminated, not described or categorized.

17Stephen Schwartz, A Brief History, 60


18Stephen Schwartz, A Brief History, 61
19Gordon Foxall, “Ascribing Intentionality.” Behavior and Philosophy 37 (2009), 220
20Ibid

Its existence is merely a function or byproduct of the language we use. Thus, mental content

along with intentionality became a taboo subject. The analytical tradition hardly agreed with the

conclusions of behaviorism; however, it provides a starting position that helps illuminate what

intentionality may look like for the analytic philosopher. Functionalism was the response that

helped bring mental content back into the discussion.

Schwartz states that “the problem with behaviorism is that it is not possible to define

mental terms using only observable behavior and dispositions to behave.”21 In order to

reinterpret mentality, functionalism posited that

mental states (events, properties, processes) are not identical with brain-states, nor are
they dispositions to behave; rather, mental states are functional states of an organism. A
functional state is defined by what it does and its relations to mental states. It is a state
that brings about or causes the behavior of an organism under specific conditions.22

Schwartz explains that causality is the prime mover for mental states. Pain, for example, is a

functional state that is attributed to conditional, external events; it is further supplied by any

other functional states that may be occurring at the same time. Much like vector addition in

kinematic physics, the combination of these varied mental states dictates the kind of behavior

that is likely to result. This presentation of mentality better informs what intentionality may look

like for analytic philosophers.

First, much like behaviorism, reflexivity is not directly addressed. The relationship

between consciousness and intentionality can be described as nothing more than an interaction of

two functional or mental states. The kind of interaction is going to be rooted in causal predicates

that correlate with neurophysiological components, analogous to a computer system. In this case,

21Stephen Schwartz, A Brief History, 184


22Stephen Schwartz, A Brief History, 185

Schwartz directly states that “consciousness does not play a role in functional explanations.”23

However, the other two components, form and detachability, can be attributed in the functionalist

position.

The functional state is, by the aforementioned definition, causally driven. This means that

intentionality as a functional state shares a very low degree of detachability. The entire

functional state supervenes on the physicality of any unique experience. For me to experience pain,

some pain-causing agent must be present in the neurophysiology of my brain.

This further highlights the external perspective of intentionality and makes the experience a

contingent phenomenon. This perspective fundamentally changes the form of intentionality: due

to the causal narrative presented by the functionalist perspective, subject and object present

themselves as scientific distinctions, not as specifically separable entities.

The distinction in the forms of intentionality contrasts with Brentano. Remember the

distinction between hearing the chalkboard screech versus the physical sound of the chalkboard

screech. Brentano attributed this to a kind of “fusing” and attributed the form to a subject-object

relationship. In the analytical sense, the question presents the need for discussing the topics from

two perspectives: the physical and the sensational. In the case of pain, the form of intentionality

is causally related, and our intentional state is directed at the neural firings that attend to pain

receptors; however, the sensory depiction of pain relies on cognition and conceptual formation

usually attributed to language and completely dependent on the causal factors of the physical

intentionality. There is no need for a subject-object distinction, rather there is just a continuum of

23Stephen Schwartz, A Brief History, 188



causally related events, with no interoperability between a supposed subject and the

represented object.24

In summary, the historical analytic position of intentionality is primarily characterized by

a complete lack of reflexivity, a very low degree of detachability from the real world, and a form

that is characterized by a kind of monism that fights against the traditional duality of subject and

object. Following the functionalist movement and the resurgence of the topic of consciousness,

the need to explain intentionality from a differing perspective arose within the analytic tradition.

Incorporating both the phenomenological and the analytic, Daniel Dennett’s work in The

Intentional Stance and its sequel Consciousness Explained provides a unique perspective that

builds on both these approaches to intentionality.

24Charles Siewert, “Consciousness and Intentionality”


IV: THE INTENTIONAL STANCE - SYSTEMATIC INTENTIONALITY

Dennett’s position on intentionality can be broadly described as an amalgamation of the

two aforementioned schools of thought. Although a self-proclaimed verificationist25, Dennett’s

explanation of intentionality does not conform intuitively with previous schools of thought. Due

to the reasoning of behaviorism and functionalism, mental content became an under-valued

venture. By addressing mentality in cognitive terms, Dennett works to salvage the loss of mental

content and subjectivity. However, he maintains an “objective, materialistic, third-person

world[view] of the physical sciences.”26 In order to accomplish this goal, Dennett doesn’t

presuppose the same information that the other theories do. The phenomenological starting

position affirmed consciousness as its starting point, while the analytic approach began with

methods directed at language and logic. Dennett begins with stances, otherwise known as

subjective positions that individuals take when observing objects in our reality.

Dennett arranges his stances into three separate categories: the physical, the design, and

the intentional stance. The distinguishing characteristic among them is a matter of complexity. The

physical stance is to “use your knowledge of the laws of physics to predict the outcome for any

inputs.”27 For example, using the formulas given in kinematic equations to define a projectile’s movement

through space, or developing medicines for specific treatments, would operate under the

physical stance. This position has limitations, however, and Dennett utilizes Laplace’s Demon28

25Daniel Dennett, Consciousness Explained (New York: Little, Brown and Company, 2017), 456
26Daniel Dennett, The Intentional Stance (Cambridge: MIT Press, 1989), 5
27Daniel Dennett, Intentional, 16
28Pierre-Simon Laplace’s thought experiment characterized by a completely causally
deterministic world. Theoretically, knowing the exact position and velocity of every particle in
the given universe.


as the prime example. In theory, I could attempt to define every single phenomenon by the

position of every single particle in space; however, doing so is practically

impossible. In those cases where the data required for definition overwhelms us, we move toward the

design stance. From this stance, we attribute principles of design in order to identify the expected

behaviors of objects. Dennett proposes the alarm clock as an example.

Almost anyone can predict when an alarm clock will sound on the basis of the most
casual inspection of its exterior. One does not know or care to know whether it is spring
wound, battery-driven, sunlight powered, made of brass wheels and jewel bearings or
silicon chips - one just assumes that it is designed so that the alarm will sound when it is
set to sound...29

Treating objects from the design stance allows us to reasonably assess expected behavior

governed by the literal features of how an object operates. Following this train of thought, the

intentional stance broadens its view to further account for more complex systems of behavior.

Instead of predicting behavior from the laws of nature or the functional design of the system,

Dennett proposes that we assign rational agency to the object, including other humans, and make

our predictions based on that object’s beliefs, desires, and goals that it “ought to have”30. From

the intentional stance, we are then able to operate with rational agents along a kind of uniform

plane, sharing a common pool of beliefs and desires that we each attribute to one another. A few

examples might be sleeping when tired, eating when hungry, or getting frustrated when someone

cuts you off in traffic. This attribution process has limitations on specificity; however, Dennett

explains that without language, our ability to attribute these qualities would surely be hindered.

The capacity to express desires in language opens the floodgates of desire attribution. “I
want a two-egg mushroom omelette, some French bread and butter, and a half bottle of
lightly chilled white Burgundy.” How could one begin to attribute a desire for anything
so specific in the absence of such verbal declaration? How, indeed, could a creature come

29Daniel Dennett, Intentional, 17


30Daniel Dennett, Intentional, 17

to contract such a specific desire without the aid of language? Language enables us to
formulate highly specific desires.31

This restricted sense of intentionality ultimately means that the intentional objects that fill our

mental content are significantly fewer than what the phenomenological approach would endeavor

to claim. Our intentional position is only directed at the object we choose to attribute rational

agency to vice any given object of sensation. Because of this orientation of intentionality,

categorizing Dennett’s theory by reflexivity, detachability, and form yields a significantly

different perspective than both the phenomenological and analytical theories. In order to provide

a cohesive description, I will first begin with consciousness, a topic that Dennett spent years

developing.

Reflexivity in Dennett’s work is very high; however, it does not share the same kind of

relationship as Husserl and Brentano. The intentional stance is the foundational component of

self-consciousness and consciousness. In his words “selves and minds and even consciousness

itself are biological products.”32 Products that rely on a fundamentally material world. In the

appendix of Consciousness Explained Dennett credits his theory of content, the intentional

stance, to his theory of consciousness.

My fundamental strategy has always been the same: first, to develop an account of
[mental] content that is independent of and more fundamental than consciousness - an
account of content that treats equally all unconscious content fixation (in brains, in
computers, in evolution’s “recognition” of properties of selected design) - and second to
build an account of consciousness on that foundation.33

31Daniel Dennett, Intentional, 20


32Daniel Dennett, Consciousness, 421
33Daniel Dennett, Consciousness, 456

Consciousness is entirely dependent on the intentional stance in Dennett’s framework. And this

position highlights a mediation between the phenomenological and analytical approaches. It

supports the existence of mental content without sacrificing its materialistic foundations.

However, just because mental content has been re-introduced does not mean that the duality of

subject-object form has been re-established.

Dennett’s form of intentionality is treated as a biological function. Attributing objectivity

or subjectivity is really just a matter of perspective.

Here is how it works: first you decide to treat the object whose behavior is to be predicted
as a rational agent; then you figure out what beliefs that agent ought to have, given its
place in the world and its purpose. Then you figure out what desires it ought to have, on
the same considerations, and finally you predict that this rational agent will act to further
its goals in the light of its beliefs.34

In other words, this is how we as humans make use of the intentional stance. In some respects, taking

the intentional stance can be described as functioning by design or, as Dennett’s previous remarks

clearly indicate, through a complete physical description. He maintains the analytical description

of the form, which is singular and uniform. There is no subject or object to dissect. It wholly

depends on an individual’s attribution of intentionality. The last question to ask is how mental

content can be attached to the physical world. It would seem that if the form is continuous, then

the dream I had about a purple elephant who wanted to fly must have been of some real object to

which I ascribed the intentional stance.

To make sense of detachability in the intentional stance, Dennett recreates

phenomenology and creates his own kind of bracketing methodology through a new kind of

phenomenology that he calls heterophenomenology: a method that operates as a “neutral path

34Daniel Dennett, Intentional, 17



leading from objective physical science and its insistence on the third-person point of view, to a

method of phenomenological description that can (in principle) do justice to the most private and

ineffable subjective experiences...”35 The intentional objects of our thought are not, in fact,

guaranteed to correlate with the physical properties of the world just because there exists a

continuous form. Husserl’s phenomenological reduction suggested that we bracket the sciences

and affirm the truth condition of an intentional object, but Dennett’s ‘bracketing’ suggests the

exact opposite. He makes use of novels as an analogy to the subjective experience of the mind to

explain the process.

One can learn a great deal about a novel, about its text, about the point, about the author,
even about the real world, by learning about the world portrayed by the novel. Second, if
we are cautious about identifying and excluding judgments of taste or preference, we can
amass a volume of unchallengeable objective fact about the world portrayed. All
interpreters agree that Holmes was smarter than Watson; in crashing obviousness lies
objectivity.36

In Dennett’s description, you bracket the truth condition of someone’s phenomenal descriptions

and withhold judgment on the ontological position of the intentional object. This method allows

you to determine the intentional objects’ legitimate traits and descriptors which can then be

compared to the physicality of the real world. The intentional object could be heavily abstracted

or reasonably not abstracted. The question of detachability falls on a spectrum where the

underlying foundation will be the position of the physical stance.

In summary, Dennett’s systematic intentionality is highly reflexive and foundational. It

also shares the form of analytical philosophy in that it is causally driven, but its degree of

detachability is relational and can only be interpreted via the method of heterophenomenology.

35Daniel Dennett, Consciousness, 72


36Daniel Dennett, Consciousness, 79

It shares some consistency with the previous two schools of thought but also departs at major

points. Ultimately, Dennett’s intentional system reaffirms the importance of mental content by

taking intentionality as its foundation. This crucial step makes his perspective distinctly

useful for an analysis of AI.

Here is where we stand: I have presented three separate views on intentionality:

phenomenological, analytic, and systematic representation, and categorized them by three

distinguishing traits. The remaining question is by which method we can apply intentionality

to our current AI machines. Which system is likely to be the most applicable or rational to

presume? In order to apply one of these viewpoints accurately, I will outline the basic principles

that contribute to modern visual AI systems, systems whose design is expected or claimed to

replicate human-level visual interpretation. Melanie Mitchell’s book, Artificial Intelligence:

A Guide for Thinking Humans, provides a historical and technological background to interpret

this question while providing valuable critiques. After some technical explanation, we should

have enough functional understanding of AI to apply an appropriate theory.


V: AI - CONVOLUTIONAL NEURAL NETWORKS AND THEIR FAULTS

Each of the perspectives I have presented thus far begins with a different kind of

foundation, a different presumption of what is true. In order to fully understand how

intentionality can be attributed to AI machines, I suggest we ask what kind of foundation these

machines begin with. Mitchell sets this foundation by distinguishing between symbolic and sub-

symbolic systems. Symbolic AI follows a similar construct to basic logic systems. When coding,

the focus is on defining the principled rules that govern or manipulate a generic variable. What

the variable represents means very little; it is simply symbolic. The game for symbolic systems is

about the relationship between variables and how humans programmed them to operate. Mitchell

states that “while these symbols represent human-interpretable concepts,... the computer

running this program of course has no knowledge of the meaning of these symbols.”37 Sub-

symbolic systems, however, “took inspiration from neuroscience and sought to capture the

sometimes unconscious thought processes...”38 Sub-symbolic systems don’t work by setting up

systemized rules; they operate through numerical value sets that translate to bits of code.

Mitchell describes this system through the perceptron, a system designed to take in a weighted

set of numerical values, combine them, and apply this combined weight to some output

threshold. If this valued threshold is met, the output would translate to a bit code of “1” vice

“0”.39 The distinguishing characteristic between these two starting points is that symbolic

systems require an understanding of cognitive processes that humans can generate rules for,

37Melanie Mitchell, Artificial, 11


38Ibid
39Melanie Mitchell, Artificial, 12-18


while sub-symbolic systems translate raw data sets. However, both systems operate at the level

of syntax, not semantics. Syntax refers to a set of rules or processes that the system is designed to

perform; semantics refers to the meaning of certain words and values, a crucial character trait in

intentional systems. These perceptron machines were the kernel for further developments in AI,

more specifically multilayer neural networks and machine learning: the beginning of modern AI

visual systems.
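
To make the perceptron mechanic concrete, the following is a minimal sketch in Python of the operation just described: weighted numerical inputs are combined and compared against a threshold, producing a 1 or a 0. The particular inputs, weights, and threshold values are illustrative placeholders of my own, not figures taken from Mitchell's account.

    def perceptron(inputs, weights, threshold):
        # Combine the weighted inputs and compare the total against the threshold.
        combined = sum(x * w for x, w in zip(inputs, weights))
        return 1 if combined >= threshold else 0

    # Hypothetical example: three numerical inputs with hand-picked weights.
    print(perceptron([0.9, 0.2, 0.5], weights=[1.0, -0.5, 0.8], threshold=1.0))  # prints 1
    print(perceptron([0.1, 0.9, 0.0], weights=[1.0, -0.5, 0.8], threshold=1.0))  # prints 0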

In order to understand a layered network, we can easily use an analogy to any layered

system. My preference would be a cake. Imagine the bottom layer of the cake, flat and uniform.

That first layer is made of a number of perceptrons that we, as pseudo coders, provide weighted

inputs to; then we fashion another layer above it, known as a hidden layer. We don’t provide

inputs to the middle layer; instead, we adjust the threshold by which the perceptrons in that layer

will “light up” or not. Then the top layer of the cake and the icing, which covers everything else,

will formulate the output. This is a multilayer neural network. So far, the system is pretty

simplistic in that it doesn’t require any learning. But if the top output layer of the system could

receive feedback, then we enter the world of Machine Learning via back-propagation. Back-

propagation is when the top layer adjusts the middle layer thresholds after being informed of a

mistake that it made in order to reduce error, otherwise known as training. This process is the

fundamental mechanism that operates in visual AI systems. It becomes further robust when deep

learning and convolutional neural networks are introduced.
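
To make the layered-cake picture and back-propagation concrete, the following is a minimal Python sketch, assuming nothing beyond the description above: a single hidden layer of units is trained on a small toy problem by repeatedly pushing the output error back to adjust the weights. The data, layer sizes, learning rate, and number of training steps are invented for illustration, not drawn from any system Mitchell discusses.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy data set: four input pairs and the target output for each (XOR).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4))   # bottom layer -> hidden layer weights
    b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer weights
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: each layer combines its weighted inputs and "lights up".
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Back-propagation: push the output error back to adjust earlier weights.
        error = output - y
        grad_out = error * output * (1 - output)
        grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

        W2 -= 0.5 * hidden.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ grad_hidden
        b1 -= 0.5 * grad_hidden.sum(axis=0, keepdims=True)

    print(output.round(2))  # after training, the outputs should sit near [0, 1, 1, 0]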

Deep Learning is really not any different than what I have previously discussed. As

Mitchell explains, “Deep learning simply refers to methods for training ‘deep neural networks’,

which in turn refers to neural networks with more than one hidden layer.”40 The same method of

back-propagation is used to train these robust visual systems during deep learning. All that is left

is to incorporate more than one hidden layer into our cake, and we have manufactured a deep neural

network. In order to incorporate the convolution aspect, we have to change how the first and

second layers transfer information to each other. Convolutional neural networks operate more

precisely than what I have previously explained. First, the term convolutional refers to what each

hidden layer does with the data it receives. The layers perform a convolution. In an analogy for

visual representation, a convolution is kind of like an enhancement feature of a heavily pixelated

photograph. The first layer may ‘display’ or represent the data as a very blurry photo where the

AI compares dark to light contrasts, while the second layer identifies more fine-grained details,

such as right angles that the first layer was unable to separate. This operation acts essentially

the same as the brain does in its visual cortex.41 Given this feature coupled with deep learning,

visual systems greatly improve in identifying certain kinds of objects within photographs, such

as dogs or cats.42 Because of the success rate at which these visual systems are able to identify

objects, our question of intentionality becomes that much more important. Do these systems

actually intend a dog when they identify it? In other words, do they know what dog means? Has

semantic understanding been established? As we will see, there are some distinct characteristics

of convolutional neural networks that are certainly not human or brain-like, where the topics of

Big Data, representation in adversarial responses, and the problem of the black box play key

roles in expanding this question and informing us what kind of intentionality AI represents.

40Melanie Mitchell, Artificial, 72


41Melanie Mitchell, Artificial, 79
42Melanie Mitchell, Artificial, 88
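
The convolution step itself can be sketched in a few lines of Python: slide a small filter over an image and record how strongly each patch matches the feature the filter encodes. The tiny image and the hand-written dark-to-light contrast filter below are invented for illustration; in a real convolutional network the filters are learned during training rather than written by hand.

    import numpy as np

    # A 4x6 toy "image": dark pixels on the left, bright pixels on the right.
    image = np.array([[0, 0, 0, 9, 9, 9]] * 4, dtype=float)

    # A 3x3 filter that responds to dark-to-light vertical contrast.
    kernel = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype=float)

    def convolve2d(img, k):
        kh, kw = k.shape
        out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                # Multiply the filter against one patch of the image and sum the result.
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        return out

    feature_map = convolve2d(image, kernel)
    print(feature_map)  # the largest responses sit where the dark/light boundary lies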

As previously mentioned, deep learning models can only adjust their hidden layers if they

receive some form of back-propagation. After a network makes a guess on an image, such as

‘dog’ or ‘cat’, it receives feedback; if the guess is incorrect or attributes an inadequate

probability score, the system adjusts its hidden layers and threshold values. If you only use one

image of a cat and one of a dog, the system will inevitably only be proficient at identifying those

two photographs. A famous user of convolutional neural networks is Facebook. Facebook

utilized this technology to create advanced facial recognition software that “labeled your

uploaded photos with names of your friends and registered a patent on classifying the emotions

behind facial expressions in uploaded photos.”43 Although ethically consequential, the theme that

I am drawing upon here is data. A convolutional neural network cannot be trained appropriately

unless it is subject to a vast amount of data. In Facebook’s case, that data is every single photo

uploaded to their website. Another data bank used to train visual AI systems is called ImageNet.

ImageNet is a massive repository of photos with labeled objects that allows any convolutional

network to train and learn about images with correctly identified objects. What makes this

questionable for intentionality is that each of our theories attributes intentionality without this vast

interpretation of data. Even children can generalize about what a dog is at an incredibly young

age after having only met one or two of them. This doesn’t necessarily mean that AI doesn’t

intend the dog, but it certainly isn’t operating within the same capacities as humans are. Mitchell

informs us that human-level vision certainly incorporates more than just object recognition.

If the goal of computer vision is to ‘get a machine to describe what it sees’, then machines
will need to recognize not only objects but also their relationships to one another and how
they interact with the world. If the ‘objects’ in question are living beings, the machines

43Melanie Mitchell, Artificial, 102



will need to know something about their actions, goals, emotions, likely next steps, and
all the other aspects that figure into telling the story of a visual scene.44

This concern begins to highlight the second issue in transcribing intentionality. What is being

represented to visual AI systems? What exactly are they seeing?

The problem of representation in AI is not that AI cannot identify some object. Rather, it

can be tricked into identifying the wrong object.

Researchers have discovered that it is surprisingly easy for humans to surreptitiously trick
deep neural networks into making errors. That is, if you want to deliberately fool such a
system, there turns out to be an alarming number of ways to do so.45

This kind of manipulation is referred to as an adversarial example. Mitchell utilizes AlexNet to

describe this problem, a system that proved itself to be at least 85% accurate over thousands of

images provided by ImageNet in 2012.46 Referencing a paper titled ‘Intriguing Properties of

Neural Networks’, Mitchell explains this very issue. She states that

the paper’s authors had discovered that they could take an ImageNet photo that AlexNet
classified correctly with high confidence (for example, ‘School Bus’) and distort it by
making very small, specific changes to its pixels so that the distorted image looked
completely unchanged to humans but was now classified with very high confidence by
AlexNet as something completely different (for example, ‘Ostrich’).47
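
The principle behind such a distortion can be illustrated on a toy linear scorer: nudge every pixel by a tiny amount in whichever direction most lowers the score for the current label. Everything below - the weights, the stand-in "photo", and the two labels - is invented for illustration; the real attacks Mitchell describes use the gradients of a trained deep network such as AlexNet, but the underlying idea is the same.

    import numpy as np

    w = np.linspace(-1.0, 1.0, 100)      # weights of a toy two-class scorer
    image = np.full(100, 0.5)            # a stand-in "photo" of 100 pixel values
    image[50:] += 0.08                   # this version scores slightly "bus"-like

    def label(x):
        return "school bus" if x @ w > 0 else "ostrich"

    epsilon = 0.05                               # at most 0.05 change per pixel
    adversarial = image - epsilon * np.sign(w)   # push each pixel against the score

    print(label(image), label(adversarial))   # the label flips from school bus to ostrich
    print(np.abs(adversarial - image).max())  # yet no pixel moved by more than 0.05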

These kinds of adversarial examples are not exclusive to visual AI systems either. Any system

that organizes data into code is subject to this pitfall. So, what’s happening? Is the AI actually

seeing an ostrich vice a bus? In other words, what fills the mental content of an AI system that

44Melanie Mitchell, Artificial, 107


45Melanie Mitchell, Artificial, 128-129
46Melanie Mitchell, Artificial, 129
47Ibid

makes this kind of error? This concern draws upon a clear distinction: visual AI systems

are not representing content in the same manner as humans do. What exactly is being

represented is correlated to the method of convolution that is performed in the hidden layers.

This feature of representation is being discussed in connectionist depictions that correlate

weighted language “baskets” and parallel processing to specific features depicted to AI

machines, but whether or not that process contributes to semantic interpretation is still under

debate.48 This depiction of representation, as process driven, gives us some indication as to what

kind of intentional system we should attribute to AI. However, if our attention is directed at the

convolutions being performed in the hidden layers of deep neural networks, we ultimately run

into the problems surrounding transparent AI.

It shouldn’t come as a surprise then that one of the hottest new areas of AI is variously
called ‘explainable AI’, ‘transparent AI’ or ‘interpretable machine learning’. These terms
refer to research on getting AI systems - particularly deep networks - to explain their
decisions in a way that humans can understand.49

The topic of the black box in AI is probably the most popular to compare with human-like

intentionality. The black box refers to the process by which the top layer of a convolutional

neural network adjusts the hidden layers of its system. Researchers and AI alike have a difficult

time attempting to explain the rationale behind certain modifications. It’s not clear why a

school bus suddenly looks like an ostrich and AI isn’t willing to explain its reasoning. Mitchell

explains the concern of this phenomenon.

MIT’s Technology Review magazine called this impenetrability ‘the dark secret at the
heart of AI’. The fear is that if we don’t understand how AI systems work, we can’t really
trust them or predict the circumstances under which they will make errors.50

48Geoffrey Hinton, “Connectionist Learning Procedures”, Artificial Intelligence 40 (1989)


49Melanie Mitchell, Artificial, 128
50Melanie Mitchell, Artificial, 127

This is a very real human concern. Think for a moment about what the reasoning behind

writing this very paper is. If I have done my job well enough, that reasoning should be clear and

decipherable for those who read it. This objective for AI is currently not possible. In order for

me, as an individual, intending my supposed object for this paper, I have to consider the

implications of my recommendation and provide an explanation. I can’t simply state my case as

fact. Because of this lack of knowledge in AI reasoning, attributing intentionality becomes

simplified. The depiction of mental content in AI becomes something of a moot point, much like

the analytical tradition, we can only observe behavior.

In summarizing Al’s basic principles and functions in the field of visual technology

through deep learning and convolutional neural networks, we now have a platform by which we

can compare Al with our three aforementioned theories of intentionality. The categories of

detachability, form, and reflexivity inform how AI can be associated with intentionality, even

despite its inconsistencies with human-level interpretations. Ultimately, the method for

interpreting the intentionality of AI systems is going to rely on continued advancements in the

field, but a rational starting point will prove valuable as developments arise.
VI: CAN AI HAVE INTENTIONALITY?

To begin categorizing AI into any of the previously mentioned intentional systems, we

can first look at the phenomenological perspective of intentionality, which in turn will indirectly

inform the analytical position. Because of the nature of the task at hand, the only appropriate

perspective of intentionality to attribute to AI is going to be the systematic intentional stance as

Dennett proposes. As we will see, the phenomenological approach attributes more than

warranted, while the analytic approach withholds judgment on the possibility of an intentional

AI. Both theories inevitably succumb to removing intentionality from the discussion.

Does AI display any characteristic that correlates to the perspectives provided by

Brentano and Husserl? Recall that the phenomenological position indicated a high degree of

reflexivity, a high degree of detachability, and a subject-object form. For reflexivity, in the

description provided by Mitchell, there is no assumption to be made that visual AI systems share

their intentional states with consciousness. Fundamentally, the architectural foundation of AI

starts at a wholly syntactic-based structure that presumes absolutely zero consciousness. This

starting foundation is completely opposite to Husserl and Brentano. However, despite the

foundational differences, a number of investigators have pressed on, assuming that in the future,

this foundational element of consciousness will be satisfactorily met. Johannes Marx & Christine

Tiefensee presume the introduction of strong AI, also known as Artificial General Intelligence

(AGI). AGI, by definition, is capable of thinking, acting, and believing just like humans. In an

article titled ‘Of Animals, Robots and Men’, Marx and Tiefensee begin to analyze questions of

intentionality after presuming consciousness.


If we are to follow advocates of strong AI, it is very likely that future robots will closely
resemble humans and animals in that they will qualify as agents which choose
appropriate means in order to attain a self-chosen end. However, robots will also differ
from animals and humans in important respects, so that the agency of robots need not
entail that they are also right holders. In order to be regarded as the holder of rights,
robots would have to be sentient beings with an idea of a subjective good and important
interests that are worthy of protection. Do robots have such important interests? Are they
sentient beings that care whether their lives go better or worse?51

Although this is an interesting thought experiment that has value in its own right for interpreting what rights are for humans, evaluating intentionality for AI in this way will inevitably fail because of the foundation that is present in existing AI structures and, presumably, the same substructures for AGI. We can't start with consciousness where there is none to be found.

Ultimately, AI currently has a low degree of reflexivity when compared with the

phenomenological approach to intentionality. In a similar capacity, detachability fundamentally

falls victim to the same kind of treatment. Both Brentano and Husserl indicate that the intentional

object can be detached from reality, as is the case with hallucination. In an analogous way, we

can imagine that AI is, in a sense, hallucinating when it mistakes a school bus for an ostrich.

However, without the implicit assumption of consciousness, we have to ask what functional

mental state is hallucinating without consciousness. By definition, hallucination requires

perception, which Brentano and Husserl attribute to a conscious mind. This line of reasoning

supports the conclusion that AI has a low degree of detachability. Lastly, there is the question of form.

Without a clear presence of consciousness, Husserl and Brentano would conclude that there is no

subject to observe any object in the previously described visual AI system. Ultimately, if we

presume or make use of the phenomenological approach in current AI systems, we will have to

conclude that they do not have intentionality, and consequently, the questions regarding whether

51Johannes Marx and Christine Tiefensee, “Of Animals, Robots and Men.” Historical Social
Research / Historische Sozialforschung 40, no. 4 (154) (2015), 85

or not AI knows the meaning of 'dog' become completely irrelevant. It is simply operating as

designed. Interestingly enough, in an article titled “Phenomenology: What’s AI got to do with

it?” Alessandra Buccella and Alison A. Springle provide a use for phenomenology that

completely removes the intentional aspect. It focuses on the recognition that AI can be developed

to fulfill our own phenomenological needs. Making use of a visual AI system to augment

someone’s visual difficulties requires a phenomenological description, but fundamentally relies

on the same mechanical foundation of how visual data is represented in the brain. Buccella and

Springle describe deep neural networks as the best tool for the job.

it turns out that deep neural networks learn best when the information they pick up from
their inputs (images) is “catered” for V1 (i.e. primary visual cortex), which encodes high-
level visual features commonly experienced by sighted people, that is, features that enter
visual conscious experience and are therefore part of visual phenomenology.52

This kind of position readily shows that AI is not granted intentionality itself; it is purely viewed from a mechanistic standpoint because it is not a subject itself. From this point, we can transition

to the analytical depiction of intentionality, which finds solace in the depiction I have thus far

presented, but ultimately the analytic descriptions of AI fail to make any steadfast determination

on the subject of AI and what our ambitions are for these systems.

The analytic approach to intentionality is characterized by low or undefined reflexivity, a

non-detachable system, with a form that opposes a subject-object duality. Reflexivity is not

heavily acknowledged from the analytic viewpoint due to a lack of established mental content.

Without recourse to unobserved mental phenomena, AI suits the analytic position on reflexivity

very well. The other two traits of intentionality also align appropriately with the analytical

52Alessandra Buccella and Alison A. Springle, “Phenomenology: What’s AI Got to Do with
It?” Phenomenology and the Cognitive Sciences (2022), 11

position. Due to the causal nature of form in the analytic position, we see that a functionalist

viewpoint arises, eliminating the subject-object distinction. This causal form also informs

intentionality’s relationship to detachability because the source of the supposed behavior of the

AI system is directly determined by the image or data presented to it. There is no mediation

that occurs in this unilateral process. Even though the analytic tradition generates a clearly promising depiction of intentionality, AI's intentional states are just functional states with correlated meanings. This depiction unfortunately fails to recognize how people react to AI and questions the need for intentionality at all.


VII: AI INTENTIONALITY - THE NEED FOR THE INTENTIONAL STANCE

There are a number of examples where AI has performed in some capacity outside of

what we as humans anticipated it could do with negative consequences. Self-driving cars, bias-

riddled identification programs, and Chat GPT-4 all yield some kind of flaw in AI's

interpretation of its data. Self-driving cars, specifically, fall victim to the Long Tail problem. The

problem is a consequence of the need for vast amounts of data in order to adequately interpret

every possible scenario that a vehicle may find itself in. Mitchell explains this phenomenon

through a real-world example.

In February 2016, one of Google’s prototype self-driving cars, while making a right turn,
had to veer to the left to avoid sandbags on the right side of a California road, and the
car’s left front struck a public bus driving in the left lane. Each vehicle had expected the
other to yield (perhaps the bus driver expected a human driver who would be more
intimidated by the much larger bus).53

The inevitable problem is that there is literally an infinite number of issues that can arise outside

of the governing AI's previous training regime. These kinds of circumstances prompt researchers

to incorporate unsupervised learning. Everything previously discussed up to this point was

trained by supervised learning, meaning that a human has to tailor the data that is initially submitted to the AI system. Think back to the dog and cat photos. Someone has to find and label every single photo of dogs and cats presented to the system until it is

adequately trained. “Unsupervised learning refers to a broad group of methods for learning

53Melanie Mitchell, Artificial, 117-119


categories or actions without [human] labeled data.”54 However, “no one has yet come up with

the kinds of algorithms needed to perform successful unsupervised learning.”55 In other words,

AI machines need to learn to think like humans. They need what Mitchell calls ‘common sense’,

a relational description and understanding of how objects work together in their shared environment. Part of this concern is that the lack of common sense also leads to the amplification of society's biases.
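To make concrete what 'supervised' amounts to in the dog-and-cat example above, consider the following minimal sketch. It is written in PyTorch with stand-in data and a toy network; it is not the code of any system Mitchell describes, and every name in it is illustrative. The point is only that the learning signal exists because a human has attached a label to every image; an unsupervised learner would have to discover the categories without that second column of labels.

```python
# A minimal sketch of supervised learning (illustrative only; stand-in data and toy network).
import torch
import torch.nn as nn

# Hypothetical labeled dataset: each entry pairs an image tensor with a human-supplied label.
labeled_data = [(torch.randn(3, 64, 64), 0),   # 0 = "cat"  (stand-in image)
                (torch.randn(3, 64, 64), 1)]   # 1 = "dog"

# A very small convolutional classifier, standing in for a full network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),          # two output classes: cat, dog
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for image, label in labeled_data:
    logits = model(image.unsqueeze(0))             # add a batch dimension
    loss = loss_fn(logits, torch.tensor([label]))  # the error is defined by the human label
    optimizer.zero_grad()
    loss.backward()                                # adjust weights toward the labeled answer
    optimizer.step()
```

Scaled up to millions of images, this labeling requirement is exactly the bottleneck that motivates the turn toward unsupervised learning and, as Mitchell argues, toward 'common sense'.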

Mitchell continues to explain through various examples that biased systems can be easily

integrated into deep neural networks. Whether the system is automatically tagging photos of

people with inappropriate categories or classifying individuals with the incorrect gender, based

on the content of the surrounding environment, it becomes clear that AI is magnifying our

society’s integrated biases.56 A more concerning application of this unfortunate feature was

discussed on the podcast Hi-Phi Nation in an episode titled “The Precrime Unit” in January

2019. The podcast tells a narrative fitting for a science fiction movie. The star of the episode is a

neural network that focuses its algorithms on identifying criminal hotspots in Los Angeles, CA

and works to inform police of the probability of a crime occurring before it happens. The data that this system trains on consists of the details and context of historical crimes. It also

makes use of what the LAPD inputs into the AI system after various interactions. Specifically,

officers assign an extra point to individuals that are questioned in an area where suspicious

activity occurs. Then, that point system is incorporated into the algorithm of the neural network.

What this inevitably leads to is false accusations and reduced efforts to control crime in other

54Melanie Mitchell, Artificial, 119


55Ibid
56Melanie Mitchell, Artificial, 123-126

areas of a community.57 Problems like bias can be addressed in the programming algorithms of neural networks; however, we are still asking humans to calculate and organize an unconscious event in our minds. Without 'common sense' in the AI learning environment, these machines are going to operate as mirrors of our own misjudgments. To amplify the problem, the black box of AI

prevents us from finding the specific rationalization that AI is making when it calculates an

obviously biased result. If we simply rely on the “behavior” of AI as the analytic approach would

prefer, we functionally won’t be able to describe the misattributions of biases in these systems.

Taking an intentional stance allows us to interpret these systems and their functions from a vantage point outside of just the physical or design stances. In the field of linguistic AI, we run into another dilemma that the intentional stance may help alleviate.

Chat GPT and other Large Language Models, whose underlying systems rely on deep neural networks, display a surprising amount of what appears to be rationality. In an article titled “Do Large Language Models Understand Us?” Blaise Agüera y Arcas interacts with a Large Language Model named LaMDA whose transcript of discussion is indistinguishable from any conversation I would have with another human. The following example that Agüera y Arcas provides conveys this sentiment.

ME: I dropped the bowling ball on the bottle and it broke.


LaMDA: That’s too bad. Did it cut you?
ME: What broke?
LaMDA: The bottle you were talking about.
If we swap these concrete nouns for different ones, we get quite a different response:
ME: I dropped the violin on the bowling ball and it broke.
LaMDA: That sounds sad 😢
ME: What broke?

57Barry Lam, “The Precrime Unit,” January 31, 2019, in Hi-Phi Nation, produced by Barry Lam
of UC Riverside, podcast, MP3 audio, 47:13:00

LaMDA: The violin when it hit the bowling ball.58

In further interactions with this model, Agüera y Arcas presents a case not unlike what I am discussing here, as he states below.

When we interact with LaMDA, most of us automatically construct a simplified mental


model of our interlocutor as a person, and this interlocutor is often quite convincing in
that capacity. Like a person, LaMDA can surprise us, and that element of surprise is
necessary to support our impression of personhood.59

In other words, we attribute, to some degree, the intentional stance towards this otherwise non-

sentient being. If we don't do this, we find ourselves facing a problem similar to that of bias attribution.

In my own endeavor to understand the intentionality of these systems, I held my own

conversation with Chat GPT-4 and was met with an interesting prompt supplied by the creators.

While we have safeguards in place, the system may occasionally generate incorrect or
misleading information and produce offensive or biased content. It is not intended to give
advice.60

Not only are these models capable of being biased, but they are also capable of spreading

misinformation, another consequence of our own ability to lie or spread information without verification. AI, as a machine, is fundamentally changing our experience of truth, and

interpreting their ‘motivations’ is becoming increasingly difficult without resorting to some

limited intentional position. By taking the intentional position, we acknowledge that these systems are capable of mimicking mental states, and in this operation, we can treat them more

58Blaise Agüera y Arcas. “Do Large Language Models Understand Us?” Daedalus 151, no. 2
(2022): 187.
59Blaise Agüera y Arcas, “Language”, 193
60Chat GPT-4 warning notification

carefully when implementing them in our everyday lives. However, as I will explain through

Dennett’s position, current AI systems don’t fully support the criteria that we have laid out.
VIII: MODIFIED INTENTIONAL STANCE

Recall that Dennett’s intentional theory posits a high degree of reflexivity, utilizing

intentionality as a foundation for consciousness. It also shares the same form as analytical

intentionality but withholds judgment on a position of detachability without first utilizing the

practice of heterophenomenology to identify common threads in the descriptions of mental

content. Current AI systems have the same form that Dennett posits, but the other two

components haven’t necessarily been fully introduced in AI systems. We can attribute

intentionality to these systems, but we can’t verify consciousness. Because of this shortcoming, it

becomes practically difficult to implement the heterophenomenological method on their

supposed ‘mental content’. In a way, what I have just outlined is a position that focuses entirely

on the syntax of AI and not the possibility of semantic content. This position is popularized by John Searle and his famous Chinese Room Experiment in his paper titled Minds, Brains, and Programs.

The Chinese Room Experiment posits a human agent operating in a giant box charged

with translating every document that enters the ‘input’ side of the box from a language they

understand (we’ll suppose English) into Chinese. At their disposal is an entire lexicon of syntax

rules and symbol manipulation guides to transpose English into Chinese. After the manipulation

is complete, the ‘translator’ deposits the transposed work in the ‘output’ bin on the other side of

the room. The question that arises is whether or not the translator, using only symbol

manipulation, understands Chinese. In this case, the Chinese Room uses the same form of

analytic intentionality that Dennett proposes, but Searle would argue that the system does not


understand a single word of Chinese. Ultimately it will never understand the semantics or

meaning of the language, which leads to the conclusion that the Chinese Room will never fulfill

the other two positions of reflexivity, nor detachability. In The Intentional Stance, Dennett posits

Searle’s position as follows.

Proposition 1. Programs are purely formal (i.e., syntactical).


Proposition 2. Syntax is neither equivalent to nor sufficient by itself for semantics.
Proposition 3. Minds have mental contents (i.e., semantic contents).
Conclusion 1. Having a program - any program by itself - is neither sufficient for nor
equivalent to having a mind.61

To this argument, Dennett raises a number of objections - primarily questioning what a machine would have to do to have intentionality - and introduces what I discussed in my introduction, the philosophical zombie. Ultimately, Dennett argues that Searle's position has nothing to do with the syntax-semantics relationship; rather, Searle is just incapable of attributing consciousness to AI.

Searle has apparently confused a claim about the underivability of semantics from syntax
with the claim about the underivability of the consciousness of semantics from syntax.
For Searle, the idea of genuine understanding, genuine “semanticity” as he often calls it,
is inextricable from the idea of consciousness. He does not so much as consider the
possibility of unconscious semanticity.62

When conscious thought is severed from its semantic relationship, applying the intentional

stance towards AI seems more plausible. I can simply withhold my judgments on the

consciousness of some AI system but still hold that the AI understands the language, picture, or sound that it is producing, thereby allowing me both to attribute a degree of reflexivity and to make use of the heterophenomenological method. Fundamentally, Dennett's position extends

61Daniel Dennett, Intentional, 323-324


62 Daniel Dennett, Intentional, 335

well beyond AI; his viewpoint is designed to explain human thought. Once attributed to AI, we

can see the basic principles that govern human action through the application of AI systems.

However, I have to admit that simply choosing to adhere to the intentional stance in relation to AI is much more difficult in practice than Dennett may be alluding to.

Examples of attributing the intentional stance are notoriously explained through young

children. We can all readily imagine a child with their stuffed animal or imaginary friend

discussing “very important business” whilst they play an imaginary game of Dungeons and

Dragons or whatnot. Children are apparently capable of attributing an entire array of human

experience to a teddy bear. But as we age, to what degree do we continue to do this? Dennett’s

argument posits that we do this out of fundamental human nature - so much so that it is not only unconscious, it is the foundation by which we become conscious. In my own conversations with Chat GPT-4, I personally found it difficult to do what Dennett claims is so simple. In other

words, while conversing with this language model, I fully embraced Dennett’s design stance. I

could easily remind myself that this was nothing more than a very well-designed system,

completely devoid of any intentionality. In fact, it told me so. When asked “Do you have intentionality?” Chat GPT-4 stated:

As an artificial intelligence language model, I do not have subjective experiences,


thought, or intention of my own. I can only respond to the prompts I receive from users to
the best of my abilities, based on the vast amount of data I have been trained on63

Have I lost my child-like attribution skills? My experience is not to say that the process of attributing intentionality to AI systems is impossible, but rather to argue that the ease with which Dennett expresses it is suspect. It would appear that my own will to attribute intentionality may

63Chat GPT-4 Transcript



not be enough to actually grant it; alternatively, AI may need to incorporate some modification in order for

attribution of intentionality to actually occur. This anecdotal experience highlights another

position that Dennett presents. Although a topic in AI ethics, the function of higher-order

intentionality may play a vital role in the attribution processes of AI.

In an article titled “Did HAL Commit Murder?” Dennett presents an interesting ethical dilemma; although extreme, it falls under the same problematic position I presented previously with regard to bias and misinformation attributions in AI. Namely, to what extent can we blame AI for the issues that it causes? In “2001: A Space Odyssey”, a science fiction horror movie

directed by Stanley Kubrick, the notorious HAL 9000 is the cause of multiple deaths. HAL is a

highly advanced, disembodied AI that very much resembles something like Alexa or Siri today.

HAL’s purpose throughout the film is to maintain the integrity of a specific mission being

conducted by five astronauts in deep space. However, the details of the mission have been

hidden from the crew and are known only to HAL. This particular detail unravels the events

of the film as HAL becomes increasingly focused on the apparent dichotomy between keeping

the mission details secret and maintaining a flawless reporting status to the crew. This issue

results in the death of three crew members in cryogenic sleep at the behest of HAL. The fourth

crew member to die mentioned that HAL should be shut down. HAL, apparently facing death,

decides to act in “self-defense”. The cascading finale includes HAL's dramatic death at the hands of the last living astronaut, Dave.

For HAL, or a particular system like it, Dennett attributes more than just the intentional

stance, he attributes a degree of higher-order intentionality. To explain the differences, he uses

Deep Blue, a sophisticated chess-playing AI, as an example, stating that



Deep Blue is an intentional system, with beliefs and desire about its activities and
predicaments on the chessboard; but in order to expand its horizons to the wider world of
which chess is a relatively trivial part, it would have to be given vastly richer sources of
“perceptual” input - and the means of coping with this barrage in real-time.64

In order for Deep Blue to fundamentally integrate into the world of “common sense”, as Mitchell

suggests for AI, “it would have to become a higher order intentional system, capable of framing

beliefs about its own beliefs, desires about its desires, beliefs about its fears about its thought

about its hopes, and so on.”65 In order to attribute this higher degree of intentionality, Dennett

references a number of verbal expressions that HAL utters. The utterances are distinctively

directed at HAL’s personal thoughts. For example, HAL states in the film “I can’t rid myself of

the suspicion that there are some extremely odd things about this mission.”66 On the verge of its

metaphorical “death bed,” HAL states “I'm afraid.”67 This attribution of self-represented thoughts makes it apparent to Dennett that this system is in fact intentional, or otherwise behaves just as if it is; the difference is of no concern to Dennett.

‘I don’t think anyone can truthfully answer’ the question of whether HAL has emotions.
He has something very much like emotions — enough like emotions, one may imagine,
to mimic the pathologies of human emotional breakdown. Whether that is enough to call
them real emotions, well, who’s to say?68

Dennett’s position of higher-order intentionality makes our interpretation of his

traditional intentional stance more complicated. I previously mentioned that, in my conversation with Chat GPT, I found it hard to attribute intentionality to it simply because it operates as a

64Daniel Dennett, “Did HAL Commit Murder?,” last modified January 9, 2020,
https://thereader.mitpress.mit.edu/when-hal-kills-computer-ethics/
65Ibid
66Ibid
67Ibid
68Ibid

programmed algorithm. In the case of HAL, it appears to be the exact same scenario. In the

aforementioned article, Dennett makes an appeal to the movie viewers; he suggests that we ought to put ourselves in HAL's position if we were to go to court for HAL's supposed crime.

If HAL were brought into court and I were called upon to defend him, I would argue that
Dave’s decision to disable HAL was a morally loaded one, but it wasn’t murder. It was
assault: rendering HAL indefinitely comatose against his will.69

Recall that HAL is experiencing a fractured thought process. Specifically, the contradictory

predicament of two competing agendas: flawless reporting and keeping a secret. If we are to

attribute to HAL a grandiose host of mental content including emotions, memory, or the ability

to rationalize murder, then why can’t we attribute the basic mechanic of lying? The mental

state that Dennett places HAL in is aptly known as cognitive dissonance.

Cognitive dissonance is defined as “the state of having inconsistent thoughts, beliefs, or attitudes,

especially as relating to behavioral decisions and attitude change.”70 This phenomenon is fairly

simple. For example, I may know that smoking cigarettes increases my chances of contracting

lung cancer, but I choose not to quit. The uncomfortable experience I may have at the moment is

what is characteristically called cognitive dissonance. In order to resolve it, I have to eliminate

one of the concerns. I can quit smoking or appeal to some negating factor - something simple to ease the strain of my cognitive discomfort. I may say, “My grandfather smoked his whole life

and died at 96. I’ll be fine!” In other words, I rationalize my behavior following a very simple

mechanic. I restrict the information flow and just ignore the competing data. Why is it that HAL

couldn’t rationalize anything else but murder? Is it not simpler to lie about the mission details?

What I picture in HAL's hypothetical case is a twisted trolley problem, except HAL just kills

69Ibid
70Oxford Languages, Google Dictionary, April 24, 2023

everyone in hopes of meeting both objectives: mission details are kept secret, and flawless reporting resumes, so long as there is no one to report to.

Ultimately, HAL is not a higher-order intentional system; HAL is just like our misrepresenting, biased, misinformation-giving AI of modern times. Attributing a new kind of higher-order intentionality is not going to eliminate the problem of characterizing AI with intentionality; however, it does highlight what may be necessary. Given just the intentional stance, we should add some qualifying behavior sets that actually make it possible to grant intentionality to AI. There are some valuable topics to take up in light of the intentional stance position that would support what it means to actually and fully attribute intentionality to AI.

Artificial intelligence, as history indicates, is designed after and inspired by the human mind. If there is any sentient machine that should continue to be a source of information, I would recommend the complex yet humble human as the place to identify additional qualifying criteria. An article

titled “Adopting the intentional stance toward natural and artificial agents” written by Jairo

Perez-Osorio and Agnieszka Wykowska highlighted important characteristics that determine whether or not people can even begin taking an intentional position toward robots.

One could speculate that humans would not adopt the intentional stance toward a man-
made artifact. In fact, this was confirmed by several findings: a study using a
manipulation of the prisoner's dilemma showed that areas associated with adopting
the intentional stance in the medial prefrontal and left temporoparietal junction were not
activated in response to artificial agents, whether or not they were embodied with a
human-like appearance (Krach et al., [50]). Similarly, Chaminade et al. ([14]) found that
the neural correlates of adopting the intentional stance were not observed in interactions
with artificial agents during a relatively simple rock-paper-scissors game. These findings
suggest that robots do not naturally induce the intentional stance in the human interacting
partner. On the other hand, humans are prone to attribute human-like characteristics to
non-human agents. Oral tradition and records from earlier civilizations and cultures
reveal the tendency to anthropomorphize events or agents that show apparent
independent agency: animals, natural events like storms or volcanos, the sun, and the
stars. This predisposition seems to have remained until today. Research shows the ease
with which people provide anthropomorphic descriptions of agents (Epley, Waytz, &

Cacioppo, [25]). Human-like motivations, intentions, and mental states might also be
attributed to electronic or mechanical devices, computers, or in general, agents that give
the impression of a certain autonomous agency.71

It would appear that AI needs to meet some kind of criteria to essentially activate the natural

process of the intentional stance. There are a few behaviors and systems that would certainly

bring a case like HAL’s into a more legitimate position. First, the AI system should behave as if

it has what Dennett describes as higher-order intentionality with the inclusion of irrational

mechanisms. Secondly, the incorporation of common sense and unsupervised learning may be

the very foundation that is needed for irrational and rational thought; and lastly, there must be reciprocation of intentionality.

Human beings are fundamentally rational and irrational. Creating an AI that solely

operates on the rationality of being completely correct in every circumstance means that it will

never, itself, display intentionality. Fundamentally, that means we are purely attributing the

design stance to these systems. For systems like self-driving cars, this may be the only function

that we need. However, consider the increasing demand for AI ethics where the concerns range

from job loss to ethical problem-solving. In an article titled “Computational ethics”, a group of researchers “propose a framework - computational ethics - that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making.”72 The message feels indicative of attributing the intentional stance to AI. However, the

premise of the article is to introduce a number of ethical dilemmas and begin to acknowledge the

slew of decision-making processes that arise when confronted with a complicated ethical issue

and then identify a formalized process by which a computer system can make moral evaluations.

71Jairo Perez-Osorio & Agnieszka Wykowska, “Adopting the intentional stance toward natural
and artificial agents”, Philosophical Psychology, 33:3, (2020), 380
72Edmond Awad et al, “Computational ethics”, Trends in Cognitive Sciences, 26:5, (2022)

The same dilemma presented in the case of HAL inevitably occurs. This methodology, which

will certainly yield some positive results in physical applications, will not actually create an

intentional system. The problem of the Long Tail will present itself again and again if we work

to create a system designed to produce one correct result. We need to reframe the question. Why

do I trust the complete stranger driving my Uber? Is it because they took a crash course in ethical

decision-making and scored a 95% on the final exam? No, it’s because I’ve attributed to them a

number of core beliefs, such as valuing their life as much as I value my own. Furthermore, I

attribute to them the ability to make mistakes and the realization that accidents occur. In other

words, I rationalize why something of negative consequence could occur. In the case of AI

ethics, as it stands, we aren't truly ascribing intentionality when we ask if AI is thinking like us; we are meticulously attempting to revert the intentional position to the design stance. Without

the incorporation of mechanisms like cognitive dissonance that operate at the root of irrational

behavior, we functionally aren’t building the intentional system that we are apparently trying to

replicate. At the root of that issue is the method by which we are trying to train these systems; without 'common sense', these systems will never represent associated relationships as we do.

What may be the cause of ‘common sense’ failure is the fundamental architecture presented

through convolutional neural networks.

Since the design of neural networks requires such a large deposit of data, another kind of architecture may be needed to establish 'common sense'. Mitchell presented the

need for ‘common sense’ early in her book. In the final chapter, she provides a promising

depiction of what this kind of system might look like. Working with Douglas Hofstadter, a prominent figure in the field of AI research, Mitchell developed an AI system titled Copycat. With

some help from James Marshall, a graduate student, they further developed its successor,

Metacat. These systems do not operate on the same convolutional neural network design that I

have presented throughout this paper. Instead of focusing on mass data collection, these systems

focused on analogy utilizing both sub-symbolic and symbolic data sources. The kind of analogies

that Copycat would solve were called “letter-string” analogies. Here is what this kind of analogy looks like.

Problem 3: Suppose the string abc changes to abd. How would you change the string xyz
in the 'same way'?73
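To convey the flavor of these letter-string puzzles, the sketch below applies the observed change in a deliberately literal-minded way. It is only an illustration of the task, not of Copycat's actual architecture, which works through a stochastic, parallel interplay of concepts rather than a fixed rule; the function name and rule here are my own illustrative assumptions. Tellingly, the naive rule breaks on exactly the xyz case Mitchell quotes, since 'z' has no successor.

```python
def naive_letter_string_analogy(source: str, changed: str, target: str) -> str:
    """Reapply, position by position, the change observed between source and changed.

    This literal rule handles 'abc -> abd, so ijk -> ?' (answer: 'ijl'),
    but it stumbles on 'xyz', where 'z' has no alphabetic successor - the kind
    of conceptual slippage Copycat was built to handle and this sketch is not.
    """
    result = []
    for s, c, t in zip(source, changed, target):
        if s == c:
            result.append(t)                  # unchanged position: copy the target letter
        else:
            shift = ord(c) - ord(s)           # changed position: reapply the same shift
            result.append(chr(ord(t) + shift))
    return "".join(result)

print(naive_letter_string_analogy("abc", "abd", "ijk"))  # -> "ijl"
print(naive_letter_string_analogy("abc", "abd", "xyz"))  # -> "xy{"  (nonsense: 'z' has no successor)
```

Handling that breakdown gracefully requires flexible, analogical judgment rather than a mechanical rule, which is precisely what the following discussion of learning through analogy is after.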

The method of learning through analogy is not so foreign. Although more advanced in principle,

research has cited that children may learn principles of analogy from as early as 3-4 years of

age.74 This isn't to suggest that object recognition derived from convolutional neural networks is not important; rather, the introduction of another form of learning may be necessary to incorporate 'common sense' and further intentional positions. Copycat, however, lacked a

critical component: an ability to reflect on previous results. “Copycat, like all of the other AI

programs I’ve discussed in this book had no mechanisms for self-perception, and this hurt its

performance.” Metacat, however, did incorporate a self-reflective model: “it produced a running

commentary about what concepts it recognized in its own problem-solving process.”75 This kind of feature is completely different from raw data processing, and including it in the foundational

perspective of AI shows promise of ‘common sense’ architecture by giving AI a method by

which it can reason, rationally or otherwise. The last component to incorporate is a reciprocation

73Melanie Mitchell, Artificial, 338


74Jean-Michel Boucheix, Richard Lowe, and Jean-Pierre Thibaut, “A Developmental Perspective
on Young Children’s Understanding of Paired Graphics Conventions From an Analogy Task”,
Frontiers in Psychology: Developmental Psychology, 11 (2020),
https://www.frontiersin.org/articles/10.3389/fpsyg.2020.02032/full#:~:text=Many%20previous%
20studies%20showed%20that,et%20al.%2C%202010a).
75Melanie Mitchell, Artificial, 341

of intentionality. Without positing an expectational system, individuals will still find it difficult

to attribute intentionality to something that doesn’t recognize them for the personhood they hold.

By reciprocal intentionality, I refer to a system that recognizes you without the need for

prompting. Even if Metacat were fully developed, it may not treat me as if I have intentionality.

For example, my spouse doesn’t need to be prompted to cook breakfast for me as a loving

gesture. She is well aware that cooking breakfast in the morning is something that I would be

grateful for, and gratitude should be received. My spouse already anticipates the attribution of

gratitude-focused intentional states from me, and her behavior is dictated by this knowledge. To

simplify the example, I could say: recognize people as people. The barista is not just a coffee-making machine; they anticipate a “hello” before you order and a “thank-you” when you leave. I

expect a coffee and a “you’re welcome”. Intentionality operates as an exchange system through

the perspective of the intentional stance. And if AI systems are supposed to reflect human

intelligence, eventually, they should reciprocate their intentionality prior to being prompted for

an input exchange process. Expectations are a fundamental component in our current

sociological environment. At the root of these is a very complex interactional framework that

relies on the kind of functions that the intentional stance is proposing to attribute. In other words,

expectations come from the attribution of the intentional stance. Today, expectations are central topics of the political landscape, including debates on gender, sex, finance, and religion. An

article titled “Philosophical and Psychological Dimensions of Social Expectations of

Personality” concluded that “social expectations influence social behavior and determine the

behavior of an individual, small contact group, community, or large mass of people.”76 Although

76Volodymyr V. Khmil, & I. S. Popovych. "Philosophical and psychological dimensions of social


expectations of personality." Anthropological Measurements of Philosophical Research 16
(2019): 55-65.

not the sole factor for intentional thought, it is certainly not likely that intentionality could be attributed without this social connection to inform us of what kinds of beliefs would be reasonable to

attribute to others. Philip Robbins and Anthony I. Jack provide a unique depiction that supports

this kind of description. Although not a demand for reciprocity, the article “The Phenomenal

Stance” incorporates some of the same mechanics.

Robbins and Jack posit a new kind of stance, one that doesn’t rely on the intentional

stance. The phenomenal stance is characterized by “ascribing phenomenal states (emotions,

moods, pains, visual sensations, etc.)” and “a felt appreciation of their qualitative character.”77

The important point about the phenomenal stance is the societal factor that Robbins and Jack claim motivates their position.

“A final point about the phenomenal stance concerns its special role in mediating social

interaction. The phenomenal stance is geared largely toward affiliation, the primary motor of

which is instinctive empathy.”78

Although I disagree with positing a new kind of stance,79 the discussion on the moral or

empathetic application of intentionality correlates with the concept of reciprocal intentionality.

The Golden Rule comes to mind - treat others as you would like to be treated. These kinds of

expectations that we anticipate from one another are rooted in the intentional perspectives of

what one ought to believe. At no point should the imposition of expectations demand a uniform

77Philip Robbins & Jack I. Anthony “The Phenomenal Stance.” Philosophical Studies: An
International Journalfor Philosophy in the Analytic Tradition 127, no. 1 (2006), 69-70
^Philip Robbins & Jack I. Anthony, “Phenomenal”, 72
79The scope of Robbins & Jack’s article is to incorporate a new perspective that explains the
apparent experience of a dualistic landscape. The Phenomenological Stance, is their answer. My
intent is to highlight that a social, empathetic component is necessary, but is explainable through
the intentional stance.

response; however, ascribing the intentional stance towards AI will incur expectations, and

without the reciprocity of those expectations, the appeal to intentional positions will ultimately

fail due to an inability to fulfill the expected behavior. In HAL’s case, I fully expect HAL to be

able to lie. This comes from the intentional attribution that I have taken toward HAL. Since that

kind of function is not present in his system, I can resort to the design stance to explain why

HAL must have been incapable of lying. Ultimately, the incorporation of irrationality, common

sense, and reciprocity of intentionality may be the key features that define intentionality for AI systems without resorting to a separate principle of higher-order intentionality. The present state of the

intentional stance already presumes the functions described in Dennett’s higher-order

intentionality structure.

Our AI systems, as they exist in their current state, can be fully described in the analytic

tradition of functionalist intentional thought. However, due to the number of issues arising in

current AI systems, the need for intentional states in AI is becoming readily apparent. The

intentional stance, in itself, is the best descriptor of how we could move forward in the endeavor

to categorize and interpret intentionality in AI systems in order to resolve or tackle the current

issues. However, the criteria and categorical limitations that I have imposed against Dennett's system of higher-order intentionality invite another viewpoint - one that, admittedly, may be unachievable and would likely place the concerns of philosophical zombies at the forefront of

the discussion.
IX: TEST AND SPECULATION

What I have presented for AI systems to develop is indicative of, what might be called, a

new and improved Turing Test. The Turing Test, introduced by Alan Turing in 1950, was meant to help answer the question, 'Can machines think?' The concept employed the use of a computer, a human competitor, and a human judge. The judge is free to ask both the computer and the competitor any number of questions that he or she wants. The catch is that the conversations only occur via a chat or messaging system. If the computer could fool the judge into thinking that it was the human rather than the competitor, then the AI has matched human-level

intelligence. The Turing Test has been subject to verification multiple times throughout history.

Mitchell presents Eugene Goostman as the first AI to claim victory in a competition held in 2014

at the Royal Society in London. Eugene Goostman was a chatbot designed to operate as a young

Ukrainian boy who was able to fool 10 out of 30 judges - passing the minimum 30% threshold prophesied by Turing himself. However, as Mitchell notes, the AI community did not, at any

point, accept the victory as a valid display of human intelligence.80 Most critiques were directed

at the judges and their ability to interpret the conversations that were held. Since then, new

systems like Chat GPT-4 continue to increase their complexity and language skills, yet the Turing

Test has still not been recognized uniformly as being beaten. Although the Turing Test has been

modified and argued against since 1950, what I have presented throughout this paper is not

meant to be a test, but rather a consequential interpretation. The veil that covers AI from the

judge is the very concern that I have with the Turing Test. In our everyday interactions with

80Melanie Mitchell, Artificial, 45-48


others, we rarely choose to ascribe intentionality from behind a veil, given that everyone we meet is more or less a professional at attributing intentionality. If the intent is to actually imbue AI

systems with the intentional object or even direct our own intentional states towards them, then

the veil needs to be removed. As I mentioned before, Dennett’s intentional stance best describes

mental content for humans. Following this train of thought: did I ever choose to describe my

friends, family, or colleagues as rational agents? I have no recollection of making that decision.

In fact, it seems that if intentionality were the precursor to consciousness, then my choice in the

matter would be moot. The last concern I have with Dennett’s position is the recognized

autonomy that he ascribes to agents in their ability to grant intentionality. Although the

mechanism for intentionality as he presents it is useful in thinking about our current AI systems as

they emerge in the technological landscape, it fundamentally relies on our awareness to make use

of something otherwise unconsciously operating. Will we ever unconsciously attribute AI with intentionality, or, as in my conversations with Chat GPT-4, will I constantly face the recognition

that these systems are just fundamentally programmed to respond appropriately? This particular

question shifts focus from current AI systems to what they may become, namely when the

Singularity occurs.

Singularity is best understood as the mechanism driving the creative landscape of science

fiction and its depiction of AI. Ex Machina, The Matrix, Prometheus, and Her are just a few titles where, through some exceptionally intelligent design mechanisms, we, as humans, manage to create AI that surpasses human-level intelligence at an exponential rate - fully intentional systems that claim to represent human intentionality in the same capacity as HAL. In most, though not all, films, the exponential growth of this intelligence results in quite a negative impact on humanity as we know it. The concept of the Singularity was popularized by Ray Kurzweil,

a famous inventor who was awarded the National Medal of Technology and Innovation in 1999

for numerous technological achievements.81 Kurzweil, as Mitchell notes, was made famous

primarily because of the kinds of predictions he made regarding the exponential growth of

technology and AI. His claims extended from environmental clean-up to brain uploads, to advanced AI metacognition that will surpass human-level intelligence in 2045. Although well

beyond the scope of just intentionality, the fundamental comparison that I have is that once we

attribute intentionality to machines, it will likely occur as easily and unconsciously as it does

with humans now. The Singularity will not occur in the violent overthrow of humanity, it will

occur passively. Ultimately, we may find ourselves left with the same question I opened this

paper with. How do I know whether anyone or anything is a philosophical zombie or not,

including the new AI generation?

Although I am inviting this kind of interpretation, it has no pressing application to the

current issue at hand. My reasoning for introducing it is to draw out the obvious problems of

attributing intentionality to modern AI and what problems we should expect on the route to

Artificial General Intelligence. In the final section in Chapter 15 of Mitchell’s book titled ‘We

Are Really, Really Far Away,' she reminds us that these speculative concerns are truly just that. The analysis of intentionality that I have presented thus far is directly aimed at addressing Mitchell's concern. She states that “in the quest for robust and general intelligence, deep learning may be hitting a wall: the all-important 'barrier of meaning'.”82 While incorporating the

intentional position that Daniel Dennett presents in the intentional stance, philosophers of the

mind will be better positioned to work with the evolving technology that is inevitably going to

81Melanie Mitchell, Artificial, 49-50


82Melanie Mitchell, Artificial, 345

appear within the coming decades. Furthermore, it prompts the need to change the methodology

by which we design and implement these increasingly intelligent machines by incorporating

irrational thought, developing methodologies for ‘common sense’ interpretation, and

incorporating reciprocity of intentional action.


X: CONCLUSION

Throughout this thesis, I have worked to describe intentionality from three overarching viewpoints throughout history: the phenomenological, analytic, and systematic representations. Through their descriptive qualities, I categorized each of them by the characteristic traits of reflexivity, detachability, and basic form in order to make parallel associations with modern AI.

By providing the functional descriptions of convolutional neural networks and their associated complications, it was clear that the analytic approach to intentionality characteristically described these systems. Ultimately, however, this interpretation fails to offer any determinate explanation for future applications and large-scale ethical concerns in modern civilization's use of these systems. This complication showed that the systematic intentional stance approach provided

by Daniel Dennett is best suited to the changing technological environment. With further

elaboration on Dennett’s position, I characteristically assigned three new criteria to amplify his

intentional argument and support its introduction. In the end, the full application of intentionality to AI systems is decidedly far from being executed in real time; however, the perspective provides philosophers and researchers a position outside of the typical discussion of

consciousness with attributable qualities for intentionality.

Bibliography
Agüera y Arcas, Blaise. “Do Large Language Models Understand Us?” Daedalus 151, no. 2
(2022): 183-97. https://www.jstor.org/stable/48662035.
Awad, Edmond, Sydney Levine, Michael Anderson, Susan Leigh Anderson, Vincent Conitzer,
M.J. Crockett, Jim A.C. Everett, et al. 2022. “Computational Ethics.” Trends in Cognitive
Sciences 26 (5): 388-405. doi:10.1016/j.tics.2022.02.009.
Barry Lam, “The Precrime Unit,” January 31, 2019, in Hi-Phi Nation, produced by Barry Lam of
UC Riverside, podcast, MP3 audio, 47:13:00
Boucheix Jean-Michel, Lowe Richard K., Thibaut Jean-Pierre, “A Developmental Perspective on
Young Children’s Understandings of Paired Graphics Conventions From an Analogy
Task” Frontiers in Psychology, 11 (2020),
https://www.frontiersin.org/articles/10.3389/fpsyg.2020.02032

Brentano, Franz. Psychology From an Empirical Standpoint. London: Routledge, 2015.


Buccella, A., Springle, A. “Phenomenology: What's AI got to do with it?” Phenom Cogn Sci
(2022). https://doi.org/10.1007/s11097-022-09833-7
Dennett, Daniel. Consciousness Explained. New York, NY: Little, Brown and Company
Hachette Book Group, 2017.
Dennett, Daniel. The Intentional Stance. Cambridge, MA: The MIT Press, 2006
Dennett, Daniel C. “Did HAL Commit Murder?” Introduction by David G. Stork. The MIT
Press Reader, January 9, 2020. Accessed April 24, 2023.
https://thereader.mitpress.mit.edu/when-hal-kills-computer-ethics/.
Foxall, Gordon R. “Ascribing Intentionality.” Behavior and Philosophy 37 (2009): 217-22.
http ://www.j stor. org/stable/41472436.
Hinton, Geoffrey E. "Connectionist learning procedures." In Machine learning, pp. 555-610.
Morgan Kaufmann, 1990.
Husserl, Edmund, Daniel O. Dahlstrom, and Elizabeth L. Wilson. 2014. Ideas for a Pure
Phenomenology and Phenomenological Philosophy. [Electronic Resource]. Hackett
Publishing Company, https ://search-ebscohost-
com.avoserv2.1ibrary.fordham.edu/login.aspx?direct=true&db=cat00989a&AN=ford.2662
077&site-eds-live.

Khmil, Volodymyr V., and I. S. Popovych. "Philosophical and psychological dimensions of
social expectations of personality." Anthropological Measurements of Philosophical
Research 16 (2019): 55-65.
Marx, Johannes, and Christine Tiefensee. “Of Animals, Robots and Men.” Historical Social
Research / Historische Sozialforschung 40, no. 4 (154) (2015): 70-91.
http://www.jstor.org/stable/24583247.
Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. London: Pelican
Books, 2020.
Perez-Osorio, Jairo, and Agnieszka Wykowska. 2020. “Adopting the Intentional Stance toward
Natural and Artificial Agents.” Philosophical Psychology 33 (3): 369-95. https://search-
ebscohost-
com.avoserv2.1ibrary.fordham.edu/login.aspx?direct=true&db=phl&AN=PHL2401472&sit
e=eds-live.
Robbins, Philip, and Anthony I. Jack. “The Phenomenal Stance.” Philosophical Studies: An
International Journal for Philosophy in the Analytic Tradition 127, no. 1 (2006): 59-85.
http://www.jstor.org/stable/4321682.
Schwartz, Stephen P. A Brief History of Analytic Philosophy: From Russell to Rawls.
Chichester: Wiley-Blackwell, 2013.
Siewert, Charles. “Consciousness and Intentionality.” Stanford Encyclopedia of Philosophy.
Stanford University, April 4, 2022. https://plato.stanford.edu/ENTRIES/consciousness-
intentionality/.

ABSTRACT

Matthew David Johnson

BS, University of Oklahoma

Attributing Intentionality to Artificial Intelligence: An Investigation

Thesis directed by Peter Tan, Ph.D.


Intentionality, a mental phenomenon characterized by the aboutness, directedness, or givenness of some object in the world, is often defined by a few discrete traits. This thesis is directed at ascribing

these traits to three different theories of intentionality, specifically phenomenological, analytical, and

systematic intentionality. Furthermore, I probe if any one particular theory is best suited to attribute to the

growing technology in Artificial Intelligence (AI). First, I've presented each theory and ascribed the traits of reflexivity, form, and detachability. After each theory is adequately defined, it becomes apparent that the analytic approach is the best suited to support current AI convolutional neural networks. However,

growing capabilities and our perspective on these machines will call for a more refined approach as the

limit of convolutional neural networks is reached. These limits are characterized by the technical flaws

that arise in neural networks as we increasingly attempt to make these AI machines think more like their

human inventors.

A different kind of intentional position will be needed to support this endeavor, specifically the

systematic interpretation of intentionality. Utilizing Daniel Dennett’s intentional stance, we can grant

greater flexibility in the integration of various interpretations of intentionality. Although a rational

approach to begin with, Dennett’s position needs refinement in order to adequately apply it to the current

and future technological landscape. By implementing a modified intentional stance, we can provide an

avenue by which we as philosophers and scientists can investigate further claims on consciousness and

highlight what parameters can be attributed to AI systems as they inevitably become more advanced.
VITA

Matthew David Johnson, son of Christopher and Carrie Johnson, was born September 24,

1992, in Bremerton, Washington. After graduating in 2011 from Norman High School, he

entered the University of Oklahoma as the recipient of the Naval Reserve Officer Training Corps

National Scholarship. In 2015, he received a Bachelor of Science degree in Human Behavior.

That same year he was commissioned into the United States Navy as a Surface Warfare Officer

and has served on two Guided Missile Destroyers across three deployments.

While remaining on active duty, he was accepted to Fordham University in January 2021

through the undergraduate PCS program. He was later accepted into the Master of Philosophy

program in September 2021. While working toward his master’s degree, under the mentorship of

Dr. Peter Tan, he worked with the New York City Naval Reserve Officer Training Corps Unit as

an Assistant Professor of Naval Science at SUNY Maritime College, Columbia University, and

Fordham University.
