
The Participation Game

A Post-Turing Frontier for


Generative AI Systems

Dr Mark Thomas Kennedy


Co-Director, Data Science Institute
Imperial College London




















In this talk, we will …

• Start with the Turing Test


• See why we are moving beyond it
• Gain intuitions for generative AI
• Place generative AI in the wider ML landscape
• Talk about implications
• Close with a short story of practical user tips

Since Alan Turing proposed his


“imitation game” as a challenge
for AI, conversation and natural
language processing have been
central to scholarly exploration of
what it means to think like humans.

Why move beyond
the Turing test?

Too easy?

(See Searle’s 1980 Chinese Room Argument, the CRA)


Why move beyond
the Turing test?

Already passed?

Medal for the Loebner Prize, which ran 1990-2020


(see Epstein 1992; Halpern 2006; Floridi and Chiriatti 2020).
This sort of exercise is passé in light of the “bitter lesson” (Sutton 2019)
and obsoleted by LLMs (e.g., Brown, Mann, Ryder, Subbiah, Kaplan et al. 2020).










Why move beyond
the Turing test?
New questions

Virtual team-mates raise questions
about human-AI collaboration and trust,
especially in MMORPGs

• Too easy? (Searle’s 1980 CRA)
• Already passed? (Loebner Prize 1990-2020)
• New questions (Human-AI trust in teams, esp. in games)

2019: AI team-mates most requested feature


from players of Tom Clancy’s “Ghost Recon”



Briefly, an aside on technology …


From learning to program

to programs that learn
Automation
Capital-for-labour
substitution

Augmentation
Creativity and
productivity improvement



Deep learning breakthroughs

• All the world’s words as data (thank you, digital transformation!)
• Attention mechanism for generative pre-trained transformers
• Extremely fast supercomputers (thank you, gamers!)

And later, Sutton’s (2019) “Bitter Lesson” essay.
The distributional* hypothesis
An important insight from Harris (1954)

• “the parts of a language do not occur arbitrarily relative to each other: each element occurs
in certain positions relative to certain other elements” (Harris 1954:146).
• “it is possible to describe the occurrence of each element indirectly, by successive
groupings into sets, in such a way that the total statements about the groupings of
elements into sets and the relative occurrence of the sets are fewer and simpler than the
total statements about the relative occurrence of each element directly” (Harris 1954:147)
• In other words, the neighbourhoods of words reveal their meanings. This insight suggests
we can represent word meanings in vectors that are relatively small and relatively dense.

* In this context, the word “distribution” conveys the quest to understand the semantics of a language by “not only the empirical discovery of what are its irreducible
elements and their relative occurrence, but also the mathematical search for a simple set of ordered statements that will express the empirical facts” (Harris 1954:148).
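The insight above can be made concrete with a toy sketch. The mini-corpus, window size, and word choices below are all hypothetical illustrations, and the bare co-occurrence counts stand in for real embedding methods (e.g., word2vec) only in spirit: words with similar neighbourhoods end up with similar vectors.

```python
from collections import Counter
from math import sqrt

# Hypothetical mini-corpus: "cat" and "dog" occur in similar neighbourhoods.
corpus = "the cat drinks milk . the dog drinks water . the cat chases the dog".split()

def cooccurrence_vector(word, window=1):
    """Count the words appearing within `window` positions of each occurrence of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

cat, dog, milk = (cooccurrence_vector(w) for w in ("cat", "dog", "milk"))
# "cat" and "dog" share neighbours ("the", "drinks"), so their vectors are
# closer to each other than "cat" is to "milk".
```

This is exactly Harris’s point in miniature: the vectors are small, dense summaries of relative occurrence, and similarity of neighbourhoods stands in for similarity of meaning.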

You are sitting in a comfortable chair by the fire, on a cold


winter’s night. Perhaps you have a mug of tea in hand,
perhaps something stronger. You open a magazine to an
article you’ve been meaning to read. The title suggested a
story about a promising — but also potentially dangerous
— new technology on the cusp of becoming mainstream,
and after reading only a few sentences, you find yourself
pulled into the story. A revolution is coming in machine
intelligence, the author argues, and we need, as a society,
to get better at anticipating its consequences. But then the
strangest thing happens: You notice that the writer has,
seemingly deliberately, omitted the very last word of the
first .

The missing word jumps into your consciousness almost


unbidden: “the very last word of the first paragraph.” …

_____

OK, clever, but what does that do for us, really?


… It turns out that with enough training data and sufficiently deep neural
nets, large language models can display remarkable skill if you ask them
not just to fill in the missing word, but also to continue on writing whole
paragraphs in the style of the initial prompt. For example, when I gave
GPT-3 the prompt:

In 1958, Frank Rosenblatt introduced the Perceptron, an early version of


a neural net that could only detect the location of a square on a printed
card. Fifty years later, neural nets are capable of . . .

The software dutifully completed the thought, and then continued on,
picking up on the historical framing of the initial text:

. . . detecting and classifying faces, finding patterns in data, and


emulating the human brain. While the neural net matured in academia, it
also found its way into the tech industry. In the late 1980s and early 90s,
neural nets were used in speech recognition and character recognition
applications. The growth in computing power and the ability to collect
larger data sets allowed other neural net applications to emerge.
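The “fill in the missing word” game above can be imitated in miniature with a toy bigram model. This is a deliberately simple, hypothetical illustration: real LLMs like GPT-3 use deep transformer networks over subword tokens, not bigram counts, but the underlying task — predict the next word from what came before — is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy training text, echoing the GPT-3 completion quoted above.
training_text = (
    "neural nets are capable of detecting faces . "
    "neural nets are capable of classifying faces . "
    "neural nets are capable of finding patterns"
).split()

# Count, for every word, which words follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    return bigrams[word].most_common(1)[0][0]
```

Calling `predict_next("capable")` yields `"of"`: the missing word jumps out of the counts, just as “paragraph” jumped into your consciousness. Scale the corpus to all the world’s words and the model to billions of parameters, and the same next-word objective yields fluent continuation.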

There is art to putting LLMs (and VLMs) to practical use, but they are
poised to revolutionise content creation (not to mention understanding it)

• Identification: from finding the document you want to auto-summarisation
• Classification: from guessing what you are looking for to spotting new categories
• Trending: from tracking trending memes to explaining what they mean
• Sensing feelings: from sentiment detection to sentiment conveyance
• Detection: from community detection to strategically walled-off worlds
• Translation: from translating languages to shifting voices to make connections
• Summarisation: from summarising a document to putting it in context
• Generation: from chatbot conversations to complex discourses
• Explanation: from testing theories to posing them
• Interpretation: from modelling meaning to participating in social construction











Context: AI is an evolving family of methods, uses, issues
Types of Machine Learning         | What does it do?         | Skill, trust issues?
Supervised Learning (SL)          | Predict, decide          | Data, labelling, edge cases
Unsupervised Learning (UL)        | Simplify, find patterns  | Data, interpretation, intent
Self-supervised Learning (SSL)    | Predict, decide          | Data, labelling/er, testing
Reinforcement Learning (RL)       | Optimise, allocate       | Data, digital-twin fidelity, QA
Transformers (GAI)                | Predict / create content | Data, guard rails, oversight

Tips: Understanding the family tree is very helpful! As you learn more, think about the kinds of methods you find most interesting, (A) the kind of use case and domain you’re interested in, and (B) relevant data resources and data engineering opportunities and challenges.
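As a rough aid to the family tree above, the paradigms differ in what the learner consumes and produces. The sketches below are hypothetical toy illustrations of those interfaces, not real library APIs:

```python
# SL: learn from labelled pairs (X, y); here, a toy 1-nearest-neighbour on numbers.
def supervised_fit_predict(X, y, x_new):
    best = min(range(len(X)), key=lambda i: abs(X[i] - x_new))
    return y[best]

# UL: find structure without labels; here, split values around a threshold.
def unsupervised_group(X, threshold):
    return {"low": [x for x in X if x < threshold],
            "high": [x for x in X if x >= threshold]}

# SSL: derive the label from the data itself; here, the next item in a sequence
# becomes the prediction target (the trick behind LLM pre-training).
def self_supervised_target(sequence, i):
    return sequence[i + 1]
```

Reinforcement learning adds a loop — act, observe reward, update policy — and transformers are a network architecture usually trained with the self-supervised recipe, so the rows of the table are complementary rather than mutually exclusive.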


We have now arrived at a
Post-Turing Frontier

https://doi.org/10.48550/arXiv.2304.12700

Like Turing, we start
with an existing game:
“Categories”

Under time pressure, generate words that fit a list of categories, all starting with a given letter.

Scoring: for words approved by majority vote, 2 points for each unique word, 1 point for each word also listed by others.



The Participation Game

• Rules: Play “Categories” with 4-6 participants, at least one of them artificial, and all known from the start to be either human or artificial participants.
• Winning: Highest score wins after a pre-set time (e.g., 30 minutes)
or when one participant reaches a score threshold (e.g., first to 21).
• Strategy: Winners must be quick and show creativity in (1) reinterpreting
categories and (2) lively voting arguments.
• Interfaces: Evolve from text chat (level 1) to audio conversation (level 2),
video chat (level 3), VR or AR (level 4), humanoid robot (level 5).
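The scoring rule above (2 points for a unique approved word, 1 point for a shared one) can be sketched as a small function. This is a hypothetical simplification — one word per player per round, with the approved set taken as given from the majority vote:

```python
# Sketch of one round of the Participation Game's scoring.
# answers:  {player_name: word offered for the category}
# approved: set of words that won the majority vote
def score_round(answers, approved):
    # Count how many players offered each word.
    counts = {}
    for word in answers.values():
        counts[word] = counts.get(word, 0) + 1
    scores = {}
    for player, word in answers.items():
        if word not in approved:
            scores[player] = 0          # rejected by the vote
        else:
            scores[player] = 2 if counts[word] == 1 else 1
    return scores

# Example round: category "animals", letter "c"; player names are hypothetical.
# score_round({"ann": "cat", "bob": "cat", "ai_1": "capybara"},
#             approved={"cat", "capybara"})
# -> {"ann": 1, "bob": 1, "ai_1": 2}
```

Note that the scoring itself is trivial; the interesting part of the game is the voting, where participants — human and artificial — argue over whether a creative reinterpretation of a category should count.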

This is already happening,


and it is a profound change

As humans develop languages and


symbol systems, they enable both
communication about what is and
persuasion about what will be.

These are the processes by which


we construct the ontologies we use
to catalogue, understand and make
sense of the world.

For phenomena that do not depend


on human awareness or assent,
social construction gives us handles
to talk about them. For more social
realities, social construction is what
gives rules and custom their power.

Virtual Influencers?
Rankings?

See virtualhumans.org

Lu of Magalu
32M followers

Having artificial intelligences participate in social construction will
require rethinking core concepts around which we build societies*

Influence
Agency
TRUST
Accountability
Legitimacy

*Including this one!
Pets and their people (owners?)

Children and their parents

Wards and their conservators

Talent and their agents

Supervising professionals and trainees

Fictitious persons and their managers with limited liability



Please, please,
by all means,
have a cheeky
peek behind
the curtain!

Smart, safe trust of


AI requires knowing
something about
how it works.


You may have heard,
things can get messy
Not just messy, but also
pretty weird

And maybe even
delusional, unless…
By knowing what is in
the language, a little
prompt engineering
really changes things

From simplistic rules
such as “It is important
for people to follow the
laws we have…” to
deeper insights such as
“it is important to
have laws the people
will follow”

The Participation Game


A Post-Turing Frontier for
Generative AI Systems

So what can we say so far?

To make the most of generative


AI (and avoid getting
bamboozled), respect it enough
to learn how it works and use it
thoughtfully. Do not treat it as an
oracle, nor as a dumb tool.
Instead, be thoughtful about when
and how it may rise to be an
artificial participant in social
construction processes—a
synthetic partner in creating the
worlds we will live in.







Thank you!
