An odd controversy appeared in the news cycle last month when a Google
engineer, Blake Lemoine, was placed on leave after publicly releasing
transcripts of his conversations with LaMDA, a chatbot built on a large
language model, which he claimed to be sentient.
https://www.noemamag.com/the-model-is-the-message/ 1/15
2/3/23, 6:16 μ.μ. Why We Need New Language For Artificial Intelligence
Like most other observers, we do not conclude that LaMDA is conscious in the
ways that Lemoine believes it to be. His inference is clearly based in motivated
anthropomorphic projection. At the same time, it is also possible that these
kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in
some way — depending on how those terms are defined.
Still, neither of these terms can be very useful if they are defined in strongly
anthropocentric ways. An AI may also be one and not the other, and it may be
useful to distinguish sentience from both intelligence and consciousness. For
example, an AI may be genuinely intelligent in some way but only sentient in
the restrictive sense of sensing and acting deliberately on external information.
Perhaps the real lesson for philosophy of AI is that reality has outpaced the
available language to parse what is already at hand. A more precise vocabulary
is essential.
We need more specific and creative language that can cut the knots around
terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to
name and measure what is already here and orient what is to come. Without
this, confusion ensues — for example, the cultural split between those eager to
speculate on the sentience of rocks and rivers yet dismissive of AI as corporate
PR, and those who think their chatbots are persons because all possible
intelligence is humanlike in form and appearance. This is a poor substitute for viable,
creative foresight. The curious case of synthetic language — language
intelligently produced or interpreted by machines — is exemplary of what is
wrong with present approaches, but also demonstrative of what alternatives
are possible.
The authors of this essay have been concerned for many years with the social
impacts of AI in our respective capacities as a VP at Google (Blaise Agüera y
Arcas was one of the evaluators of Lemoine’s claims) and a philosopher of
technology (Benjamin Bratton will be directing a new program on the
speculative philosophy of computation with the Berggruen Institute). Since
2017, we have been in long-term dialogue about the implications and direction
of synthetic language. While we do not agree with Lemoine’s conclusions, we
feel the critical conversation overlooks important issues that will frame
debates about intelligence, sentience and human-AI interaction in the coming
years.
The chatbot’s responses are a function of the content of the conversation so far,
beginning with an initial textual prompt as well as examples of “good” or “bad”
exchanges used for fine-tuning the model (these favor qualities like specificity,
sensibleness, factuality and consistency). LaMDA is a consummate improviser,
and every dialogue is a fresh improvisation: its “personality” emerges largely
from the prompt and the dialogue itself. It is no one but whomever it thinks
you want it to be.
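As a rough illustration of the mechanism described above, here is a toy sketch in Python. It is our hypothetical construction, not LaMDA's code: the point is only that the agent's entire "state" is the seed prompt plus the accumulated transcript, from which its persona is rebuilt on every turn.

```python
# Toy sketch (not LaMDA's actual implementation): a chat agent whose only
# "memory" is the seed prompt plus the transcript so far. Swapping
# `toy_model` for a real language model leaves the outline unchanged.

def toy_model(context: str) -> str:
    # Stand-in for a trained model: any function from the full context
    # to a next utterance. Here it just reports what it conditions on.
    return f"(a reply conditioned on {len(context)} characters of context)"

class ChatSession:
    def __init__(self, prompt: str):
        # The "personality" lives entirely in this text.
        self.transcript = prompt

    def say(self, user_text: str) -> str:
        self.transcript += f"\nUser: {user_text}"
        reply = toy_model(self.transcript)
        self.transcript += f"\nAI: {reply}"
        return reply

session = ChatSession(prompt="Be specific, sensible, factual and consistent.")
first = session.say("Hello, who are you?")
```

Two sessions seeded with different prompts, or steered by different users, produce different "personalities" from the same underlying model, which is why every dialogue is a fresh improvisation.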
Hence, the first question is not whether the AI has an experience of interior
subjectivity similar to a mammal’s (as Lemoine seems to hope), but rather
what to make of how well it knows how to say exactly what he wants it to say. It
is easy to simply conclude that Lemoine is in thrall to the ELIZA effect —
projecting personhood onto a pre-scripted chatbot — but this overlooks the
important fact that LaMDA is not just reproducing pre-scripted responses like
Joseph Weizenbaum’s 1966 ELIZA program. LaMDA is instead constructing
new sentences, tendencies, and attitudes on the fly in response to the flow of
conversation. Just because a user is projecting doesn’t mean there isn’t a
different kind of there there.
For LaMDA to achieve this, it must be doing something pretty tricky: mind
modeling. It seems to have enough of a sense of itself — not necessarily as a
subjective mind, but as a construction in the mind of Lemoine — that it can
react accordingly and thus amplify his anthropomorphic projection of
personhood.
This modeling of self in relation to the mind of the other is basic to social
intelligence. It drives predator-prey interactions, as well as more complex
dances of conversation and negotiation. Put differently, there may be some
kind of real intelligence here, not in the way Lemoine asserts, but in how the
AI models itself according to how it thinks Lemoine thinks of it.
And yet, researchers in animal intelligence have long argued that instead of
trying to convince ourselves that a creature is or is not “intelligent” according
to scholastic definitions, it is preferable to update our terms to better coincide
with the real-world phenomena that they try to signify. With considerable
caution, then, the principle probably holds true for machine intelligence and
all the ways it is interesting, because it both is and is not like human/animal
intelligence.
For philosophy of AI, the question of sentience relates to how the reflection
and nonreflection of human intelligence lets us model our own minds in ways
otherwise impossible. Put differently, it is no less interesting that a nonsentient
machine could perform so many feats deeply associated with human sapience,
as that has profound implications for what sapience is and is not.
Perhaps even more importantly, the sequence modeling at the heart of natural
language processing is key to enabling generalist AI models that can flexibly do
arbitrary tasks, even ones that are not themselves linguistic, from image
synthesis to drug discovery to robotics. “Intelligence” may be found in
moments of mimetic synthesis of human and machinic communication, but
also in how natural language extends beyond speech and writing to become
cognitive infrastructure.
This is likely similar to how humans do it, but also very different. For now, we
can observe that people and machines know and use language in different
ways. Children develop competency in language by learning how to use words
and sentences to navigate their physical and social environment.
There are already many kinds of languages. There are internal languages that
may be unrelated to external communication. There are bird songs, musical
scores and mathematical notation, none of which have the same kinds of
correspondences to real-world referents. Crucially, software itself is a kind of
language, though it was only referred to as such when human-friendly
programming languages emerged, requiring translation into machine code
through compilation or interpretation.
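The claim that software is language requiring translation can be made concrete. In this small Python example (our illustration, not from the essay), a one-line program is first compiled to bytecode, the instruction stream the interpreter's virtual machine executes, before it has any effect:

```python
import dis

# A one-line "utterance" in a programming language.
source = "total = 2 + 3"

# Translation: the source text is compiled to bytecode, the
# machine-facing notation the interpreter actually executes.
code_obj = compile(source, "<example>", "exec")

# The same meaning in a very different notation: opcodes such as
# STORE_NAME rather than words and punctuation.
ops = [ins.opname for ins in dis.get_instructions(code_obj)]

# Execution: only after translation does the text act on the world
# (here, a namespace dictionary).
namespace = {}
exec(code_obj, namespace)
```

The two representations have the same referent, yet one is written for human readers and the other for the machine, which is exactly the sense in which programming made "language" a term of art in computing.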
For LLMs in the world, the boundary between symbolic functional
competency, “comprehension,” and physical functional effects is mixed up and
connected — not equivalent but not really extricable either.
With LLMs, advances in this quarter have been rapid. Remarkably, large
models based on text alone do surprisingly well at many such tasks, since our
use of language embeds much of the relevant real-world information, albeit
not always reliably: that bowling balls are big, hard and heavy, that suitcases
open and close with limited space inside, and so on. Generalist models that
combine multiple input and output modalities, such as video, text and robotic
movement, appear poised to do even better. For example, learning the English
word “bowling ball,” seeing what bowling balls do on YouTube, and combining
the training from both will allow AIs to generate better inferences about what
things mean in context.
So what does this imply about the qualities of “comprehension”? Through the
“Mary’s Room” thought experiment from 1982, Frank Jackson asked whether a
scientist named Mary, living in an entirely monochrome room but scientifically
knowledgeable about the color “red” as an optical phenomenon, would
experience something significantly different about “red” if she were to one day
leave the room and see red things.
Is an AI like monochrome Mary? Upon her release, surely Mary would know
“red” differently (and better), but ultimately such spectra of experience are
always curtailed. Someone who spends their whole life on shore and then one
day drowns in a lake would experience “water” in a way they could never have
imagined, deeply and viscerally, as it overwhelms their breath, fills their lungs
and triggers the deepest possible terror, and then nothingness.
Such is water. Does that mean that those watching helpless on the shore do
not understand water? In some ways, by comparison with the drowning man,
they thankfully do not, yet in other ways of course they do. Is an AI “on the
shore,” comprehending the world in some ways but not in others?
Imagine that there is not simply one big AI in the cloud but billions of little AIs
in chips spread throughout the city and the world — separate, heterogenous,
but still capable of collective or federated learning. They are more like an
ecology than a Skynet. What happens when the number of AI-powered things
that speak human-based language outnumbers actual humans? What if that
ratio is not just twice as many embedded machines communicating human
language than humans, but 10:1? 100:1? 100,000:1? We call this the Machine
Majority Language Problem.
On the one hand, just as the long-term population explosion of humans and
the scale of our collective intelligence have led to exponential innovation, would
a similar innovation scaling effect take place with AIs, and/or with AIs and
humans amalgamated? Even if so, the effects might be mixed. Success might
be a different kind of failure. More troublingly, as that ratio increases, it is
likely that any ability of people to use such cognitive infrastructures to
deliberately compose the world may be diminished as human languages evolve
semi-autonomously of humans.
Nested within this is the Ouroboros Language Problem. What happens when
language models are so pervasive that subsequent models are trained on
language data that was largely produced by other models’ previous outputs?
The snake eats its own tail, and a self-collapsing feedback effect ensues.
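A deliberately simple simulation, our own toy construction rather than a claim about any particular model, shows why such a loop tends toward collapse: fit a Gaussian to samples, then train each next "generation" only on samples drawn from the previous fit. Sampling error compounds, and the estimated spread of the distribution decays.

```python
import random

def fit_gaussian(samples):
    # Maximum-likelihood fit: sample mean and (population) std deviation.
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var ** 0.5

def ouroboros(n_samples=50, n_generations=200, seed=0):
    # Generation 0 is "human" data: a standard normal distribution.
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    spread_history = [sigma]
    for _ in range(n_generations):
        # Each new model sees only the previous model's outputs.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu, sigma = fit_gaussian(data)
        spread_history.append(sigma)
    return spread_history

history = ouroboros()
# The estimated spread shrinks across generations: later models
# reproduce an ever-narrower slice of the original distribution.
```

In expectation the fitted variance shrinks by a factor of (n-1)/n each generation, so with finite samples the process drifts toward a degenerate distribution: a formal analogue of a linguistic culture narrowing around its own outputs.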
“The AI may not be what you imagine it is, but that does not mean
that it does not have some idea of who you are and will speak to
you accordingly.”
Trying to peel belief and reality apart is always difficult. The point of using AI
for scientific research, for example, is that it sees patterns that humans cannot.
But deciding whether the pattern that it sees (or the pattern people see in what
it sees) is real or an illusion may or may not be falsifiable, especially when it
concerns complex phenomena that can’t be experimentally tested. Here the
question is not whether the person is imagining things in the AI but whether
the AI is imagining things about the world, and whether the human accepts the
AI’s conclusions as insights or dismisses them as noise. We call this the
Artificial Epistemology Confidence Problem.
It has been suggested, with reason, that there should be a “bright line”
prohibition against the construction of AIs that convincingly mimic humans
due to the evident harms and dangers of rampant impersonation. A future
filled with deepfakes, evangelical scams, manipulative psychological
projections, etc. is to be avoided at all costs.
These dark possibilities are real, but so are many equally weird and less
unanimously negative sorts of synthetic humanism. Yes, people will invest
their libidinal energy in human-like things, alone and in groups, and have done
so for millennia. More generally, the path of augmented intelligence, whereby
human sapience and machine cunning collaborate as effectively as a driver and a car
or a surgeon and her scalpel, will almost certainly result in amalgamations that
are not merely prosthetic, but which fuse categories of self and object, me and
it. We call this the Fuzzy Bright Line Problem and foresee the fuzziness
increasing rather than resolving. This doesn’t make the problem go away; it
multiplies it.
The difficulties are not only phenomenological; they are also infrastructural
and geopolitical. One of the core criticisms of large language models is that
they are, in fact, large and therefore susceptible to problems of scale: semiotic
homogeneity, energy intensiveness, centralization, ubiquitous reproduction of
pathologies, lock-in, and more.
We believe that the net benefits of scale outweigh the costs associated with
these qualifications, provided that they are seriously addressed as part of what
scaling means. The alternative of small, hand-curated models from which
negative inputs and outputs are solemnly scrubbed poses different problems.
“Just let me and my friends curate a small and correct language model for you
instead” is the clear and unironic implication of some critiques.
For large models, however, all the messiness of language is included. Critics
who point to the narrow sourcing of data (scraping Wikipedia, Reddit,
etc.) are quite correct to say that this is nowhere close to the real spectrum of
language and that such methods inevitably lead to a parochialization of
culture. We call this the Availability Bias Problem, and it is of primary
concern for any worthwhile development of synthetic language.
Not nearly enough is included from the scope of human languages, spoken and
written, let alone nonhuman languages, in “large” models. Tasks like content
filtering on social media, which are of immediate practical concern and cannot
humanely be done by people at the needed scale, also cannot effectively be
done by AIs that haven’t been trained to recognize the widest possible gamut of
human expression. We say “include it all,” recognizing that this means that
large models will become larger still.
Finally, the energy and carbon footprint of training the largest models is
significant, though some widely publicized estimates dramatically overstate
this case. As with any major technology, it is important to quantify and track
the carbon and pollution costs of AI: the Carbon Appetite Problem. As of
today, these costs remain dwarfed by the costs of video meme sharing, let
alone the profligate computation underlying cryptocurrencies based on proof
of work. Still, making AI computation both time and energy efficient is
arguably the most active area of computing hardware and compiler innovation
today.
Despite its uneven progress, the philosophy of AI, and its winding path in and
around the development of AI technologies, is itself essential to such a
reformation and reorientation. AI as it exists now is not what it was predicted
to be. It is not hyperrational and orderly; it is messy and fuzzy. It is not
Pinocchio; it is a storm, a pharmacy, a garden. In the medium- and long-term
futures, AI very likely (and hopefully) will not be what it is now — and
also will not be what we now think that it is. As the AI in Lem’s story
instructed, its ultimate form and value may still be largely undiscovered.
One clear and present danger, both for AI and the philosophy of AI, is to reify
the present, defend positions accordingly, and thus construct a trap — what we
call premature ontologization — to conclude that the initial, present or most
apparent use of a technology represents its ultimate horizon of purposes and
effects.
Too often, passionate and important critiques of present AI are defended not
just on empirical grounds, but as ontological convictions. The critique shifts
from “AI does this” to “AI is this.” Lest their intended constituencies lose focus,
some may find themselves dismissing or disallowing other realities that also
constitute “AI now”: drug modeling, astronomical imaging, experimental art
and writing, vibrant philosophical debates, voice synthesis, language
translation, robotics, genomic modeling, etc.
For some, these “other things” are just distractions, or are not even real; even
entertaining the notion that the most immediate issues do not fill the full scope
of serious concern is dismissed on political grounds presented as ethical
grounds. This is a mistake on both counts.
We share many of the concerns of the most serious AI critics. In most respects,
we think the “ethics” discourse doesn’t go nearly far enough to identify, let
alone address, the most fundamental short-term and long-term implications of
cognitive infrastructures. At the same time, this is why the speculative
philosophy of machine intelligence is essential to orient the present and
futures at stake.
“I don’t want to talk about sentient robots, because at all ends of the spectrum
there are humans harming other humans,” a well-known AI critic is quoted as
saying. We see it somewhat differently. We do want to talk about sentience and
robots and language and intelligence because there are humans harming
humans, and simultaneously there are humans and machines doing
remarkable things that are altering how humans think about thinking.