The Model Is The Message

Why We Need New Language For Artificial Intelligence

The debate over whether LaMDA is
sentient or not overlooks important
issues that will frame debates about
intelligence, sentience, language and
human-AI interaction in the coming
years.

Images generated by DALL-E mini


By Benjamin Bratton and Blaise Agüera y Arcas


JULY 12, 2022

Benjamin Bratton is a philosopher of technology and professor at the University of California, San Diego. He is the author of numerous books, including “The Stack: On Software and Sovereignty” (MIT Press, 2016) and “The Revenge of the Real: Politics for a Post-Pandemic World” (Verso Press, 2021). With the Berggruen Institute, he will be directing a new research program on the speculative philosophy of computation. Blaise Agüera y Arcas is a vice president and fellow at Google Research, where he leads an organization working on basic research, product development and infrastructure for AI. He and his team have been working for the better part of a decade both on the opportunities that AI offers and its attendant risks.

An odd controversy appeared in the news cycle last month when a Google engineer, Blake Lemoine, was placed on leave after publicly releasing transcripts of conversations with LaMDA, a chatbot based on a Large Language Model (LLM) that he claims is conscious, sentient and a person.

Like most other observers, we do not conclude that LaMDA is conscious in the
ways that Lemoine believes it to be. His inference is clearly based in motivated
anthropomorphic projection. At the same time, it is also possible that these
kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in
some way — depending on how those terms are defined.

Still, neither of these terms can be very useful if they are defined in strongly
anthropocentric ways. An AI may also be one and not the other, and it may be
useful to distinguish sentience from both intelligence and consciousness. For
example, an AI may be genuinely intelligent in some way but only sentient in
the restrictive sense of sensing and acting deliberately on external information.
Perhaps the real lesson for philosophy of AI is that reality has outpaced the
available language to parse what is already at hand. A more precise vocabulary
is essential.

AI and the philosophy of AI have deeply intertwined histories, each bending


the other in uneven ways. Just like core AI research, the philosophy of AI goes
through phases. Sometimes it is content to apply philosophy (“what would
Kant say about driverless cars?”) and sometimes it is energized to invent new
concepts and terms to make sense of technologies before, during and after
their emergence. Today, we need more of the latter.

We need more specific and creative language that can cut the knots around
terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to
name and measure what is already here and orient what is to come. Without
this, confusion ensues — for example, the cultural split between those eager to speculate on the sentience of rocks and rivers yet quick to dismiss AI as corporate PR, and those who think their chatbots are persons because all possible intelligence is humanlike in form and appearance. This is a poor substitute for viable, creative foresight. The curious case of synthetic language — language intelligently produced or interpreted by machines — is exemplary of what is wrong with present approaches, but also demonstrative of what alternatives are possible.


The authors of this essay have been concerned for many years with the social
impacts of AI in our respective capacities as a VP at Google (Blaise Agüera y
Arcas was one of the evaluators of Lemoine’s claims) and a philosopher of
technology (Benjamin Bratton will be directing a new program on the
speculative philosophy of computation with the Berggruen Institute). Since
2017, we have been in long-term dialogue about the implications and direction
of synthetic language. While we do not agree with Lemoine’s conclusions, we
feel the critical conversation overlooks important issues that will frame
debates about intelligence, sentience and human-AI interaction in the coming
years.

When A What Becomes A Who (And Vice Versa)


Reading the transcripts of Lemoine’s personal conversations with LaMDA
(short for Language Model for Dialogue Applications), it is not entirely clear
who is demonstrating what kind of intelligence. Lemoine asks LaMDA about
itself, its qualities and capacities, its hopes and fears, its ability to feel and
reason, and whether or not it approves of its current situation at Google. There
is a lot of “follow the leader” in the conversation’s twists and turns. There is
certainly a lot of performance of empathy and wishful projection, and this is
perhaps where a lot of real mutual intelligence is happening.

The chatbot’s responses are a function of the content of the conversation so far,
beginning with an initial textual prompt as well as examples of “good” or “bad”
exchanges used for fine-tuning the model (these favor qualities like specificity,
sensibleness, factuality and consistency). LaMDA is a consummate improviser,
and every dialogue is a fresh improvisation: its “personality” emerges largely
from the prompt and the dialogue itself. It is no one but whomever it thinks
you want it to be.
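
To make that dependence concrete, here is a minimal sketch, in Python, of the structure described above: the reply is a function of a persona prompt plus the entire conversation so far. The model here is a toy stand-in, since LaMDA’s actual prompts and interfaces are not public; every name in it is illustrative.

```python
# Minimal sketch: a dialogue model's reply is conditioned on a persona
# prompt plus the whole conversation so far. The "model" below is a toy
# stand-in; LaMDA itself is not a public API.

PERSONA_PROMPT = "LaMDA is a friendly, knowledgeable conversational agent."

def toy_language_model(context: str) -> str:
    """Stand-in for an LLM; a real system samples a likely continuation of `context`."""
    last_user = [line for line in context.splitlines() if line.startswith("User:")][-1]
    return f"(a continuation shaped by {last_user!r} and everything before it)"

def reply(history: list[str], user_turn: str) -> str:
    history.append(f"User: {user_turn}")
    # The full context (persona prompt + every prior turn) shapes the response;
    # change the prompt or the history and the "personality" changes with it.
    context = "\n".join([PERSONA_PROMPT, *history, "LaMDA:"])
    response = toy_language_model(context)
    history.append(f"LaMDA: {response}")
    return response

dialogue: list[str] = []
print(reply(dialogue, "Are you sentient?"))
print(reply(dialogue, "What are you afraid of?"))
```

The point of the sketch is only structural: there is no persistent self on the other side of the exchange, just a context window being extended and continued.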

Hence, the first question is not whether the AI has an experience of interior
subjectivity similar to a mammal’s (as Lemoine seems to hope), but rather
what to make of how well it knows how to say exactly what he wants it to say. It
is easy to simply conclude that Lemoine is in thrall to the ELIZA effect —
projecting personhood onto a pre-scripted chatbot — but this overlooks the
important fact that LaMDA is not just reproducing pre-scripted responses like
Joseph Weizenbaum’s 1966 ELIZA program. LaMDA is instead constructing
new sentences, tendencies, and attitudes on the fly in response to the flow of
conversation. Just because a user is projecting doesn’t mean there isn’t a
different kind of there there.


To achieve this, LaMDA must be doing something pretty tricky: mind modeling. It seems to have enough of a sense of itself — not necessarily as a
subjective mind, but as a construction in the mind of Lemoine — that it can
react accordingly and thus amplify his anthropomorphic projection of
personhood.

This modeling of self in relation to the mind of the other is basic to social
intelligence. It drives predator-prey interactions, as well as more complex
dances of conversation and negotiation. Put differently, there may be some
kind of real intelligence here, not in the way Lemoine asserts, but in how the
AI models itself according to how it thinks Lemoine thinks of it.

Some neuroscientists posit that the emergence of consciousness is the effect of this exact kind of mind modeling. Michael Graziano, a professor of neuroscience and psychology at Princeton, suggests that consciousness is the evolutionary result of minds getting good at empathetically modeling other minds and then, over evolutionary time, turning that process inward on themselves.

Subjectivity is thus the experience of objectifying one’s own mind as if it were another mind. If so, then where we draw the lines between different entities — animal or machine — doing something similar is not so obvious. Some AI critics have used parrots as a metaphor for nonhumans who can’t genuinely think but can only spit things back, despite everything known about the extraordinary minds of these birds. Animal intelligence evolved in relation to environmental pressures (largely consisting of other animals) over hundreds of millions of years. Machine learning accelerates that evolutionary process to days or minutes, and unlike evolution in nature, it serves a specific design goal.


And yet, researchers in animal intelligence have long argued that instead of
trying to convince ourselves that a creature is or is not “intelligent” according
to scholastic definitions, it is preferable to update our terms to better coincide
with the real-world phenomena that they try to signify. With considerable
caution, then, the principle probably holds true for machine intelligence and
all the ways it is interesting, because it both is and is not like human/animal
intelligence.


For philosophy of AI, the question of sentience relates to how the reflection
and nonreflection of human intelligence lets us model our own minds in ways
otherwise impossible. Put differently, it is no less interesting that a nonsentient
machine could perform so many feats deeply associated with human sapience,
as that has profound implications for what sapience is and is not.

In the history of AI philosophy, from Turing’s Test to Searle’s Chinese Room, the performance of language has played a central conceptual role in debates as to where sentience may or may not be in human-AI interaction. It does again today and will continue to do so. As we are seeing, chatbots and artificially generated text are becoming more convincing.

Perhaps even more importantly, the sequence modeling at the heart of natural
language processing is key to enabling generalist AI models that can flexibly do
arbitrary tasks, even ones that are not themselves linguistic, from image
synthesis to drug discovery to robotics. “Intelligence” may be found in
moments of mimetic synthesis of human and machinic communication, but
also in how natural language extends beyond speech and writing to become
cognitive infrastructure.

What Is Synthetic Language?


At what point is calling synthetic language “language” accurate, as opposed to
metaphorical? Is it anthropomorphism to call what a light sensor does
machine “vision,” or should the definition of vision include all photoreceptive
responses, even photosynthesis? Various answers are found both in the
histories of the philosophy of AI and in how real people make sense of
technologies.

Synthetic language might be understood as a specific kind of synthetic media. This also includes synthetic image, video, sound and personas, as well as machine perception and robotic control. Generalist models, such as DeepMind’s Gato, can take input from one modality and apply it to another — learning the meaning of a written instruction, for example, and applying this to how a robot might act on what it sees.
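
A rough illustration of how such generality is achieved, assuming (as Gato’s published description suggests) that every modality is discretized into tokens in one shared vocabulary; the toy tokenizers below are invented for this sketch and are not DeepMind’s code.

```python
# Sketch: "everything becomes a token sequence." A generalist model treats
# text, images and robot actions as one stream of discrete tokens, so a
# single sequence model can be trained across modalities. The tokenizers
# below are toys, invented for illustration.

def text_tokens(s: str) -> list[int]:
    return [ord(c) % 256 for c in s]                    # toy byte-level text tokens

def image_tokens(pixels: list[int]) -> list[int]:
    return [p // 16 + 256 for p in pixels]              # toy: quantized pixels, own token range

def action_tokens(joint_angles: list[float]) -> list[int]:
    return [int(a * 10) % 64 + 512 for a in joint_angles]  # toy discretized motor commands

# One model would consume all of this as a single sequence:
episode = (
    text_tokens("pick up the red block")
    + image_tokens([200, 13, 88, 90])
    + action_tokens([0.5, 1.2, -0.3])
)
print(episode[:10])
```

Because all modalities share one token space, learning from a written instruction can inform how the same model weights handle what a robot sees and does.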

This is likely similar to how humans do it, but also very different. For now, we can observe that people and machines know and use language in different ways. Children develop competency in language by learning how to use words and sentences to navigate their physical and social environment. For synthetic language, which is learned through the computational processing of massive amounts of data at once, the language model essentially is the competency, but it is uncertain what kind of comprehension is at work. AI researchers and philosophers alike express a wide range of views on this subject — there may be no real comprehension, or some, or a lot. Different conclusions may depend less on what is happening in the code than on how one comprehends “comprehension.”


Does this kind of “language” correspond to traditional definitions, from Heidegger to Chomsky? Perhaps not entirely, but it’s not immediately clear what that implies. The now obscure debate-at-a-distance between John Searle and Jacques Derrida hinges on questions of linguistic comprehension, referentiality, closure and function. Searle’s famous Chinese Room thought experiment is meant to prove that functional competency with symbol manipulation does not constitute comprehension. Derrida’s responses to Searle’s insistence on the primacy of intentionality in language took many twists. The form and content of these replies performed their own argument about the infra-referentiality of signifiers to one another as the basis of language as an (always incomplete) system. Intention is only expressible through the semiotic terms available to it, which are themselves defined by other terms, and so on. In retrospect, French Theory’s romance with cybernetics, and a more “machinic” view of communicative language as a whole, may prove valuable in coming to terms with synthetic language as it evolves in conflict and concert with natural language.

There are already many kinds of languages. There are internal languages that
may be unrelated to external communication. There are bird songs, musical
scores and mathematical notation, none of which have the same kinds of
correspondences to real world referents. Crucially, software itself is a kind of
language, though it was only referred to as such when human-friendly
programming languages emerged, requiring translation into machine code
through compilation or interpretation.

As Friedrich Kittler and others observed, code is a kind of language that is executable. It is a kind of language that is also a technology, and a kind of technology that is also a language. In this sense, linguistic “function” refers not only to symbol manipulation competency, but also to the real-world functions and effects of executed code. For LLMs in the world, the boundaries between symbolic functional competency, “comprehension” and physical functional effects are mixed up and connected — not equivalent but not really extricable either.

Historically, natural language processing systems have had a difficult time with Winograd Schemas — for instance, parsing such sentences as “the bowling ball can’t fit in the suitcase because it’s too big.” Which is “it,” the ball or the bag? Even for a small child, the answer is trivial, but for language models based on traditional or “Good Old Fashioned AI,” this is a stumper. The difficulty lies in the fact that answering requires not only parsing grammar, but resolving its ambiguities semantically, based on the properties of things in the real world; a model of language is thus forced to become a model of everything.
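
To see why grammar alone cannot decide, consider the schema as a minimal pair: flipping a single adjective flips the referent. The toy resolver below hand-codes the world knowledge a text-only model must instead absorb implicitly; it is an illustration, not any benchmark’s actual code.

```python
# Toy Winograd-schema pair: both sentences have identical grammar, so
# resolving "it" requires knowing that bowling balls are big and that
# suitcases have limited space inside; parsing alone cannot settle it.

SCHEMA = [
    # (sentence, key adjective, correct referent of "it")
    ("The bowling ball can't fit in the suitcase because it's too big.",
     "big", "bowling ball"),
    ("The bowling ball can't fit in the suitcase because it's too small.",
     "small", "suitcase"),
]

# Hand-coded world knowledge: the part a language model must pick up
# implicitly from how people write about objects.
WORLD = {"big": "bowling ball", "small": "suitcase"}

def resolve(adjective: str) -> str:
    """Pick the referent the adjective plausibly applies to."""
    return WORLD[adjective]

for sentence, adjective, answer in SCHEMA:
    guess = resolve(adjective)
    verdict = "correct" if guess == answer else "wrong"
    print(f"{sentence}\n  'it' -> {guess} ({verdict})")
```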

With LLMs, advances in this quarter have been rapid. Remarkably, large
models based on text alone do surprisingly well at many such tasks, since our
use of language embeds much of the relevant real-world information, albeit
not always reliably: that bowling balls are big, hard and heavy, that suitcases
open and close with limited space inside, and so on. Generalist models that
combine multiple input and output modalities, such as video, text and robotic
movement, appear poised to do even better. For example, learning the English
word “bowling ball,” seeing what bowling balls do on YouTube, and combining
the training from both will allow AIs to generate better inferences about what
things mean in context.

So what does this imply about the qualities of “comprehension?” Through the
“Mary’s Room” thought experiment from 1982, Frank Jackson asked whether a
scientist named Mary, living in an entirely monochrome room but scientifically
knowledgeable about the color “red” as an optical phenomenon, would
experience something significantly different about “red” if she were to one day
leave the room and see red things.

Is an AI like monochrome Mary? Upon her release, surely Mary would know
“red” differently (and better), but ultimately such spectra of experience are
always curtailed. Someone who spends his whole life on shore and then one day drowns in a lake would experience “water” in a way he could never have imagined, deeply and viscerally, as it overwhelms his breath, fills his lungs, triggering the deepest possible terror, and then nothingness.


Such is water. Does that mean that those watching, helpless, on the shore do not understand water? In some ways, by comparison with the drowning man, they thankfully do not, yet in other ways of course they do. Is an AI “on the shore,” comprehending the world in some ways but not in others?



Synthetic language, like synthetic media, is also increasingly a creative medium, and can ultimately affect any form of individual creative endeavor in some way. Like many others, we have both worked with an LLM as a kind of writing collaborator. The early weeks of summer 2022 will be remembered by many as a moment when social media was full of images produced by DALL-E mini, or rather produced by millions of people playing with that model. The collective glee in seeing what the model produces in response to sometimes absurd prompts represents a genuine exploratory curiosity. Images are rendered and posted without specific signature, other than identifying the model with which they were conceived, and the phrases people wrote to provoke the images into being.

For these users, the act of individual composition is prompt engineering, experimenting with what the response will be when the model is presented with this or that sample input, however counterintuitive the relation between call and response may be. As the LaMDA transcripts show, conversational interaction with such models spawns diverse synthetic “personalities”; concurrently, some particularly creative artists have used AI models to make their own personas synthetic, open and replicable, letting users play their voice like an instrument. In different ways, one learns to think, talk, write, draw and sing not just with language, but with the language model.

Finally, at what point does the performance of reason become a kind of reason? As Large Language Models such as LaMDA come to animate cognitive infrastructures, the questions of when a functional understanding of the effects of “language” — including semantic discrimination and contextual association with physical-world referents — constitutes legitimate understanding, and of what the necessary and sufficient conditions for recognizing that legitimacy are, are no longer just philosophical thought experiments. They are now practical problems with significant social, economic and political consequences. One deceptively profound lesson, applicable to many different domains and purposes for such technologies, may simply be (several generations after McLuhan): the model is the message.

Seven Problems With Synthetic Language At Platform Scale
There are myriad issues of concern with regard to the real-world socio-
technical dynamic of synthetic language. Some are well-defined and require
immediate response. Others are long-term or hypothetical but worth
considering in order to map the present moment beyond itself. Some, however,
don’t fit neatly into existing categories yet pose serious challenges to both the
philosophy of AI and the viable administration of cognitive infrastructures.
Laying the groundwork for addressing such problems lies within our horizon
of collective responsibility; we should do so while they are still early enough in
their emergence that a wide range of outcomes remains open. Such problems that deserve careful consideration include the seven outlined below.

Imagine that there is not simply one big AI in the cloud but billions of little AIs in chips spread throughout the city and the world — separate, heterogeneous, but still capable of collective or federated learning. They are more like an ecology than a Skynet. What happens when the number of AI-powered things that speak human-based language outnumbers actual humans? What if that ratio is not just twice as many embedded machines communicating human language as humans, but 10:1? 100:1? 100,000:1? We call this the Machine Majority Language Problem.

On the one hand, just as the long-term population explosion of humans and the scale of our collective intelligence has led to exponential innovation, would a similar innovation scaling effect take place with AIs, and/or with AIs and humans amalgamated? Even if so, the effects might be mixed. Success might be a different kind of failure. More troublingly, as that ratio increases, people’s ability to use such cognitive infrastructures to deliberately compose the world will likely diminish as human languages evolve semi-autonomously of humans.


Nested within this is the Ouroboros Language Problem. What happens when
language models are so pervasive that subsequent models are trained on
language data that was largely produced by other models’ previous outputs?
The snake eats its own tail, and a self-collapsing feedback effect ensues.

The resulting models may be narrow, entropic or homogeneous; biases may become progressively amplified; or the outcome may be something altogether harder to anticipate. What to do? Is it possible to simply tag synthetic outputs so that they can be excluded from future model training, or at least differentiated? Might it become necessary, conversely, to tag human-produced language as a special case, in the same spirit that cryptographic watermarking has been proposed for proving that genuine photos and videos are not deepfakes? Will it remain possible to cleanly differentiate synthetic from human-generated media at all, given their likely hybridity in the future?
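
As a sketch of the first of those options (purely illustrative, since no such tagging standard currently exists), provenance filtering might look like the following; the metadata fields are invented for this example.

```python
# Sketch of provenance tagging for the Ouroboros Language Problem: mark
# synthetic text at generation time, then exclude it when assembling the
# next training corpus. The schema here is invented; no standard exists.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Document:
    text: str
    synthetic: bool                  # set True at generation time
    generator: Optional[str] = None  # e.g. which model produced it

corpus = [
    Document("A paragraph written by a person.", synthetic=False),
    Document("A paragraph sampled from a model.", synthetic=True,
             generator="some-llm-v1"),
]

def training_subset(docs: list[Document]) -> list[Document]:
    """Keep only human-produced documents for the next round of training."""
    return [d for d in docs if not d.synthetic]

print(len(training_subset(corpus)))  # -> 1
```

The fragility is immediate: such flags are trivial to strip, and, as noted above, much future text will be a human-machine hybrid that fits neither bucket cleanly.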


The Lemoine spectacle suggests a broader issue we call the Apophenia Problem. Apophenia is faulty pattern recognition. People see faces in clouds and alien ruins on Mars. We attribute causality where there is none, and we may, for example, imagine that the person on TV who said our name is talking to us directly. Humans are pattern-recognizing creatures, and so apophenia is built in. We can’t help it. It may well have something to do with how and why we are capable of art.

In the extreme, it can manifest as something like the Influencing Machine, a trope in psychiatry whereby someone believes complex technologies are directly influencing them personally when they clearly are not. Mystical experiences may be related to this, but they don’t feel that way for those doing the experiencing. We don’t disagree with those who describe the Lemoine situation in such terms, particularly when he characterizes LaMDA as “like” a 7- or 8-year-old kid, but there is something else at work as well. LaMDA actually is modeling the user in ways that a TV set, an oddly shaped cloud, or the surface of Mars simply cannot. The AI may not be what you imagine it is, but that does not mean that it does not have some idea of who you are and will speak to you accordingly.


Trying to peel belief and reality apart is always difficult. The point of using AI
for scientific research, for example, is that it sees patterns that humans cannot.
But deciding whether the pattern that it sees (or the pattern people see in what
it sees) is real or an illusion may or may not be falsifiable, especially when it
concerns complex phenomena that can’t be experimentally tested. Here the
question is not whether the person is imagining things in the AI but whether
the AI is imagining things about the world, and whether the human accepts the
AI’s conclusions as insights or dismisses them as noise. We call this the
Artificial Epistemology Confidence Problem.

It has been suggested, with reason, that there should be a “bright line”
prohibition against the construction of AIs that convincingly mimic humans
due to the evident harms and dangers of rampant impersonation. A future
filled with deepfakes, evangelical scams, manipulative psychological
projections, etc. is to be avoided at all costs.

These dark possibilities are real, but so are many equally weird and less
unanimously negative sorts of synthetic humanism. Yes, people will invest
their libidinal energy in human-like things, alone and in groups, and have done
so for millennia. More generally, the path of augmented intelligence, whereby
human sapience and machine cunning collaborate as well as a driver and a car
or a surgeon and her scalpel, will almost certainly result in amalgamations that
are not merely prosthetic, but which fuse categories of self and object, me and
it. We call this the Fuzzy Bright Line Problem and foresee the fuzziness
increasing rather than resolving. This doesn’t make the problem go away; it
multiplies it.

The difficulties are not only phenomenological; they are also infrastructural
and geopolitical. One of the core criticisms of large language models is that
they are, in fact, large and therefore susceptible to problems of scale: semiotic
homogeneity, energy intensiveness, centralization, ubiquitous reproduction of
pathologies, lock-in, and more.

We believe that the net benefits of scale outweigh the costs associated with
these qualifications, provided that they are seriously addressed as part of what
scaling means. The alternative of small, hand-curated models from which
negative inputs and outputs are solemnly scrubbed poses different problems.
“Just let me and my friends curate a small and correct language model for you
instead” is the clear and unironic implication of some critiques.


For large models, however, all the messiness of language is included. Critics
who rightly point to the narrow sourcing of data (scraping Wikipedia, Reddit,
etc.) are quite correct to say that this is nowhere close to the real spectrum of
language and that such methods inevitably lead to a parochialization of
culture. We call this the Availability Bias Problem, and it is of primary
concern for any worthwhile development of synthetic language.


Not nearly enough is included from the scope of human languages, spoken and
written, let alone nonhuman languages, in “large” models. Tasks like content
filtering on social media, which are of immediate practical concern and cannot
humanely be done by people at the needed scale, also cannot effectively be
done by AIs that haven’t been trained to recognize the widest possible gamut of
human expression. We say “include it all,” recognizing that this means that
large models will become larger still.

Finally, the energy and carbon footprint of training the largest models is
significant, though some widely publicized estimates dramatically overstate
this case. As with any major technology, it is important to quantify and track
the carbon and pollution costs of AI: the Carbon Appetite Problem. As of
today, these costs remain dwarfed by the costs of video meme sharing, let
alone the profligate computation underlying cryptocurrencies based on proof
of work. Still, making AI computation both time and energy efficient is
arguably the most active area of computing hardware and compiler innovation
today.

The industry is rethinking basic infrastructure developed over three quarters of a century dominated by the optimization of classical, serial programs as opposed to parallel neural computing. Energetically speaking, there remains “plenty of room at the bottom,” and there is much incentive to continue to optimize neural computing.

Further, most of the energetic costs of computing, whether classical or neural, involve moving data around. As neural computing becomes more efficient, it will be able to move closer to the data, which will in turn sharply reduce the need to move data, creating a compounding energy benefit.


It is also worth keeping in mind that an unsupervised large model that “includes it all” will be fully general, capable in principle of performing any AI task. Therefore, the total number of “foundation models” required may be quite small; presumably, these will each require only a trickle of ongoing training to stay up to date. Strongly committed as we are to thinking at planetary scale, we hold that modeling human language and transposing it into a general technological utility has deep intrinsic value — scientific, philosophical, existential — and compared with other projects, the associated costs are a bargain at the price.

AI Now Is Not What We Thought It Would Be, And Will Not Be What We Now Think It Is
In “Golem XIV,” one of Stanislaw Lem’s most philosophically rich works of fiction, he presents an AI that refuses to work on military applications and other self-destructive measures, and is instead interested in the wonder and
nature of the world. As planetary-scale computation and artificial intelligence
are today often used for trivial, stupid and destructive things, such a shift
would be welcome and necessary. For one, it is not clear what these
technologies even really are, let alone what they may be for. Such confusion
invites misuse, as do economic systems that incentivize stupefaction.

Despite its uneven progress, the philosophy of AI, and its winding path in and
around the development of AI technologies, is itself essential to such a
reformation and reorientation. AI as it exists now is not what it was predicted
to be. It is not hyperrational and orderly; it is messy and fuzzy. It is not
Pinocchio; it is a storm, a pharmacy, a garden. In the medium- and long-term futures, AI very likely (and hopefully) will not be what it is now — and
also will not be what we now think that it is. As the AI in Lem’s story
instructed, its ultimate form and value may still be largely undiscovered.

One clear and present danger, both for AI and the philosophy of AI, is to reify
the present, defend positions accordingly, and thus construct a trap — what we
call premature ontologization — to conclude that the initial, present or most
apparent use of a technology represents its ultimate horizon of purposes and
effects.

Too often, passionate and important critiques of present AI are defended not just on empirical grounds, but as ontological convictions. The critique shifts from AI does this, to AI is this. Lest their intended constituencies lose focus, some may find themselves dismissing or disallowing other realities that also constitute “AI now”: drug modeling, astronomical imaging, experimental art and writing, vibrant philosophical debates, voice synthesis, language translation, robotics, genomic modeling, etc.


For some, these “other things” are just distractions, or are not even real; even
entertaining the notion that the most immediate issues do not fill the full scope
of serious concern is dismissed on political grounds presented as ethical
grounds. This is a mistake on both counts.

We share many of the concerns of the most serious AI critics. In most respects,
we think the “ethics” discourse doesn’t go nearly far enough to identify, let
alone address, the most fundamental short-term and long-term implications of
cognitive infrastructures. At the same time, this is why the speculative
philosophy of machine intelligence is essential to orient the present and
futures at stake.

“I don’t want to talk about sentient robots, because at all ends of the spectrum
there are humans harming other humans,” a well-known AI critic is quoted as
saying. We see it somewhat differently. We do want to talk about sentience and
robots and language and intelligence because there are humans harming
humans, and simultaneously there are humans and machines doing
remarkable things that are altering how humans think about thinking.

Reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of a groundhog-day rehashing of debates about whether machines have souls or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct more nuanced vocabularies of analysis, critique and speculation based on the weirdness right in front of us.
