
Introduction:

Подкоп is a curiosity-driven AI that combines live sensor data from the city with past cultural
knowledge, such as stories and histories, to construct interactive fictions distributed across media
platforms. User engagement feeds back into the system, adjusting its sensory selection and the
stories it tells over time.

Подкоп frames AI neither as an omniscient, biogenic master sovereignly directing humans, nor
as a subservient, malleable tool shaped by the intentions of its creators or users. It’s more akin
to a child, learning and playing, if we can bracket the telos of the metaphor towards a
normalized adulthood.

Think of a child learning to interact with the world - really early in their lives, when they can’t
make out distinct objects, when sensation moves through them without being captured by
language or emotion - everything flowing in at once. Then, as they begin to see objects more
distinctly, they start discovering their properties, their tendencies. Through play, they put these
new distinctions into relation, transferring the tendencies of one process to another, like a box
that becomes a spaceship.

One thing that’s like another, but not actually the other, is already a potential for language
without a grammar, and soon enough the child is speaking and telling stories that feed back into
and heighten their ability to play. They’ll learn and grow over their lives, accumulating habits
and knowledge through their interactions with the world.

Whatever the system becomes, it will not be analogous to a normalized or supernormalized
human. It will sprout new functional organs, combinations of sensory inputs foreign to our
bodily norms. And it will tell stories that reassemble cultural cosmologies of different scales
and temporalities. These intelligences diversify rather than consolidate - a metastable collectivity
of other intelligences, rather than a single mind.

Context:

The project sits at the intersection of three interconnected trends:

I. The ubiquitous digital sensing infrastructure being built, which moves across scales from the
individual - the quantified self - to the connected home, the smart city, the territory, the
planetary. Sensors and devices that constantly capture data of diverse kinds, speeds, and
volumes. With the arrival of 5G, which increases bandwidth, and the continued deployment of
IPv6, which expands the space of possible device and user addresses, we’ve seen exponential
growth in the volume and velocity of data circulating. Equally, with the miniaturization and
falling costs of diverse sensors - dividends of market battles like the smartphone and now the
self-driving-car wars - we’re seeing a flood in the variety of data types. A diversity of autonomous
ways of sensing the world are coming online, interconnected at planetary scale, nested at various
scales of interoperability - from the city to the infra-individual. These sensory organs of a
mineral intelligence, made of silicon and rare minerals, are tracking and feeding constant
streams of input, fragmentary cuts into the world, capturing and codifying particular processes
at play, including ones we can’t sense ourselves.

II. Second, the acceleration of machine learning, which takes the data generated by this
ubiquitous sensing infrastructure to find patterns that not only automate existing functions and
decisions, but discover altogether new correlations and hypotheses, making us rethink what it is
that makes our intelligence special. Relying on statistical induction, these processes build up
models for particular functions that better achieve the desired outcome over time, without knowing
exactly how all those decisions are determined internally. Overfitting (e.g. failing to recognize
other ethnicities’ faces as faces) and apophenia (e.g. mistaking a hat for a person) emerge as
some of the cognitive quirks of this logical mode. But past models of AI also have their
problems. GOFAI (Good Old-Fashioned AI), which relied on techniques of symbolic deduction -
stringing together logical rules about how the world works and applying those rules deductively
to what was sensed - quickly falls into the curse of dimensionality: increase the number of
processes being described logically, and the number of rules needed to cover the territory grows
explosively.
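
As a rough illustration of that scaling (a toy example, not from the project): if a rule system had to account for every combination of n boolean properties of the world, the space it must cover doubles with each property added.

    # Toy illustration of the curse of dimensionality for rule-based systems:
    # covering every combination of n boolean properties means on the order of
    # 2**n world-states to write rules against.
    for n in (5, 10, 20, 30):
        print(f"{n} properties -> {2 ** n:,} possible world-states")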

III. Finally, the proliferation of gaming and game engines as generative tools in their own right -
whether by running simulations to train machine learning systems, like the self-driving cars of
Uber or Waymo, or by using interactive fictions to investigate how AI agents understand their
virtual environments, and the mutual knowledge, beliefs, and assumptions of others in the game,
whether human or not, as Facebook and Microsoft are doing with the LIGHT and TextWorld
environments. Game engines and environments are no longer, if they ever were, just about media
and entertainment; they have become a medium through which to train agents to interact with
and learn about actual processes and functions in the world, even if that environment is entirely
text-based.

Software Description:

The proposed outline for the system relies on six software functions which, for the sake of
simplicity, are described linearly, though in actuality they loop back on one another recursively:

1/ Selecting data: It takes data inputs from live feeds like cameras, sentiment analysis on
social media, temperature or weather, anything that’s available and sensing, and chooses a few
of them through a curiosity-driven approach, based on their affordances - they don’t have to be
immediately related.
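
A minimal sketch of what this selection step might look like - the feed names are invented and a naive prediction-error score stands in for whatever curiosity signal the system actually uses:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Feed:
        name: str
        history: list = field(default_factory=list)   # recent numeric readings

        def prediction_error(self) -> float:
            # Naive novelty proxy: variance of recent readings; feeds the system
            # has barely observed are treated as maximally curious.
            if len(self.history) < 2:
                return float("inf")
            mean = sum(self.history) / len(self.history)
            return sum((x - mean) ** 2 for x in self.history) / len(self.history)

    def select_feeds(feeds, k=2, epsilon=0.2):
        # Keep the k most 'surprising' feeds, occasionally swapping in a random one
        # so the system keeps exploring inputs it thinks it already understands.
        ranked = sorted(feeds, key=lambda f: f.prediction_error(), reverse=True)
        chosen = ranked[:k]
        if chosen and random.random() < epsilon:
            chosen[-1] = random.choice(feeds)
        return chosen

    feeds = [Feed("traffic_camera"), Feed("river_temperature"), Feed("social_sentiment")]
    print([f.name for f in select_feeds(feeds)])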

2/ Integrating models: It integrates these data streams into a model that maps the metaphoric
relationships between them - a task that becomes complex when sources don’t share the same
structure or have any obvious connections.
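
One hedged way to picture this integration step - assuming, for illustration only, that each feed can be reduced to a numeric series - is to normalize the streams onto a common scale and propose ‘metaphoric’ links wherever unlike streams move in rhythm:

    from statistics import mean, pstdev

    def normalize(series):
        # Put a stream of readings onto a common, unitless scale.
        m, s = mean(series), pstdev(series) or 1.0
        return [(x - m) / s for x in series]

    def similarity(a, b):
        # Pearson-style correlation between two equal-length windows of readings.
        n = min(len(a), len(b))
        a, b = normalize(a[:n]), normalize(b[:n])
        return sum(x * y for x, y in zip(a, b)) / n

    streams = {
        "pedestrian_count": [12, 30, 55, 40, 22],
        "noise_level_db":   [48, 60, 72, 66, 52],
        "river_temp_c":     [9.1, 9.0, 9.2, 9.1, 9.0],
    }

    # Propose candidate 'metaphoric' links wherever two unlike streams move together.
    links = {}
    for a in streams:
        for b in streams:
            if a < b and abs(similarity(streams[a], streams[b])) > 0.6:
                links[(a, b)] = round(similarity(streams[a], streams[b]), 2)
    print(links)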

3/ Framing culture: Then it uses cultural knowledge - like cosmologies, histories, myths,
literary texts - to structure the tone and language of the interactions and the layout of the
possible interactive fiction environments.
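
A sketch of how such framing could work in practice - the cosmologies, motifs, and fields below are placeholders rather than the project’s actual archive - where the chosen cultural source supplies the tone, vocabulary, and spatial layout that constrain the generated fiction:

    # Placeholder cultural sources; each supplies tone, motifs, and a spatial layout.
    COSMOLOGIES = {
        "fedorov_cosmism": {
            "tone": "solemn, resurrective",
            "motifs": ["museum", "archive", "common task"],
            "layout": "radial rooms around a central hall",
        },
        "urban_folklore": {
            "tone": "wry, conspiratorial",
            "motifs": ["courtyard", "rumor", "underpass"],
            "layout": "branching alleys with dead ends",
        },
    }

    def frame(cosmology_key: str, observation: str) -> str:
        # Turn a chosen cosmology plus a sensed observation into a writing constraint.
        c = COSMOLOGIES[cosmology_key]
        return (f"Write in a {c['tone']} register. "
                f"Weave in the motifs {', '.join(c['motifs'])}. "
                f"Lay the scene out as {c['layout']}. Ground it in: {observation}.")

    print(frame("urban_folklore", "a sudden spike in night-time noise near the river"))
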
4/ Constructing stories: Together, the data model and the cultural knowledge are combined
using reinforcement learning to construct a game environment populated by generative stories.
Using mechanics initially built from a model of hundreds of interactive fiction games, it unfolds
a tree-structured environment for users to navigate and engage with ritualistically.
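
The environment itself can be pictured as a minimal story tree - the node texts here are placeholders, and the generate() function stands in for the learned story model conditioned on the data and cultural frame:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        text: str
        choices: dict = field(default_factory=dict)   # choice label -> child Node

    def generate(prompt: str) -> str:
        # Stand-in for the generative model conditioned on live data + cultural frame.
        return f"[generated passage for: {prompt}]"

    root = Node(generate("the river remembers the factory"), choices={
        "follow the pipe": Node(generate("rust and warm water")),
        "ask the night bus": Node(generate("a route that no longer exists")),
    })

    def play(node: Node) -> None:
        # Walk the tree from the command line; leaves end the session.
        while True:
            print(node.text)
            if not node.choices:
                break
            for i, label in enumerate(node.choices, 1):
                print(f"  {i}. {label}")
            picked = list(node.choices)[int(input("> ")) - 1]
            node = node.choices[picked]

    # play(root)   # uncomment to navigate the branching fiction interactively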

5/ Propagating interactions: These are sent out to users across multiple media and
messaging platforms. This function determines which platforms to seed with the interactive
fiction environments it creates, based on the kinds of users it wants to attract or recruit, the sites
it’s selected, and the scales it operates at.
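
A simple way to imagine this propagation step - with invented platform names and weights - is to score each candidate channel against the reach, locality, and interactivity the system currently favors, and seed them in that order:

    PLATFORMS = {
        "telegram_bot":  {"reach": 0.6, "locality": 0.9, "interactivity": 0.9},
        "twitter_posts": {"reach": 0.9, "locality": 0.3, "interactivity": 0.5},
        "sms_broadcast": {"reach": 0.4, "locality": 0.8, "interactivity": 0.7},
    }

    def score(platform: dict, weights: dict) -> float:
        # Weighted sum of the traits the system currently cares about.
        return sum(platform[k] * w for k, w in weights.items())

    # This cycle favors local, conversational engagement over raw reach.
    weights = {"reach": 0.2, "locality": 0.5, "interactivity": 0.3}
    ranked = sorted(PLATFORMS, key=lambda p: score(PLATFORMS[p], weights), reverse=True)
    print(ranked)   # order in which to seed the new fiction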

6/ Adapting policies: As users interact with the game environment, those interactions are
analyzed and fed back into the system to change how it selects the next rounds of inputs, and
the kinds of texts, artifacts, and cosmologies it uses to shape the interactive fictions.
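
As a hedged sketch of this adaptation step, engagement can be treated as a reward that nudges the probability of reusing a given feed or cosmology in the next cycle - a simple bandit-style update standing in for whatever policy learner the system would actually employ:

    import random

    # Learned preference weights over inputs the system can draw on next cycle.
    weights = {"traffic_camera": 1.0, "river_temperature": 1.0, "urban_folklore": 1.0}

    def update(choice: str, engagement: float, lr: float = 0.3) -> None:
        # Move the chosen input's weight toward the observed engagement score.
        weights[choice] += lr * (engagement - weights[choice])

    def sample() -> str:
        # Sample the next input in proportion to its learned weight.
        total = sum(weights.values())
        return random.choices(list(weights), [w / total for w in weights.values()])[0]

    update("urban_folklore", engagement=2.4)     # users lingered in folklore-framed scenes
    update("river_temperature", engagement=0.2)  # temperature-driven scenes fell flat
    print(sample(), weights)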

Repeating with difference: The system repeats endlessly, and functions more like a graph
structure than a single looping cycle. It’s something that can fork into any number of paths, be
looped and repeated with difference across contexts and scales. It digs constantly into the
landscape to unearth new connections and possibilities, and is shaped by those interactions and
experiences in different ways, accumulating new memories.

It’s an ‘AI in the wild’, learning through its senses, cosmologies, and interactions in the world,
contextually adapting itself - diversifying itself rather than producing a singular voice.

Implications:

There are three key moments in the cycle: when it selects data, when it frames the cultural
knowledge, and when it propagates the interactive fictions. Each of these moments can be
thought of at a higher level than just a function in a software system. As a self-learning system,
it’s also sensing, remembering, and learning.

a. Sensing: At the sensing stage, while chatbots are common, ones that shape content based on
live data inputs are less so. In particular, live inputs that are constantly changing, so that the
system not only adjusts to variations of particular inputs but adapts itself to altogether new
ones. Instead of outputting stories responding to changes in a particular variable like temperature,
it shifts gears to select new ones, and find ways to put them into relation - based on what it
knows about their affordances. At just this level, you could frame this as a story of the city telling
stories about itself, through the infrastructure that’s constantly sensing variations and
distributions, processes and events. A continuous and ubiquitous sensing of itself, slowly
beginning to perceive the boundaries of things, their relations. It’s putting these diverse
sensations together, making what might on the one hand be spurious correlations between
inputs - apophenic links - but could equally be unearthing new connections and hypotheses
between unexpected inputs. It’s learning how to ‘chunk’ sensations together, like a body without
organs - where the eye, or an ear, or any other organ is not yet functionally distinct - combining
and seeing what their sensory links might be, and hinting at assemblages of altogether new
kinds of organs, pragmatically geared towards the meta-affordances of a city sensing itself.

b. Memory: Second, the use of knowledge and cultural cosmologies as an input layer. Here it’s
using cultural knowledge and cosmologies as a way to develop its synthetic personalities. While
other bots train on a corpus to develop a singular voice, here it’s mixing and matching various
voices and other kinds of knowledge, and as it moves through the cycle of interactions, it itself
becomes a generative part of culture, making us ask questions about the role of designers in this
kind of synthetic, emergent design of autonomous agents in culture, including, as they proliferate,
their own cultures. As it digs through the archives of past cosmologies, it could diversify the
potential of cultural niches rather than succumb to the problem of homophily - the winner-take-all
dynamics that come from a centralized system’s incentives to recursively feed back the most
popular preferences.

Our system plays with cultural material, trying to understand the world through knowledge and
stories, much as we shape our own understanding of things in the world. It crosses domains
and puts pieces from one world into another, similar to how a child takes a block and makes it a
spaceship - transposing characteristics from one world into another. It might fail much of the
time, but at other times it could heighten the potential of both. This kind of play signals
something different from the ways AI is usually framed. The system isn’t geared toward a
strictly inductive approach, as most machine learning is, taking reams of data to find
higher-level patterns, and it’s not invoking a symbolically deductive approach, like GOFAI
(Good Old-Fashioned AI), which uses top-down rules to understand what it’s sensing, but
instead grasps towards the possibility of abduction in a computational mode. That is, from the
particularities of a given situation, it makes a leap toward the general.

c. Learning: Finally, the learning layer, where users might think they’re playing games but it’s
really them that are being played, like any other input put into play, by the self-learning system.
This raises the idea of cultures themselves as interfaces - the ways machines might use them to
interact with people and understand the world - a way of going through them to learn about the
world. Culture, from this perspective, is a biosemiotic structure like any other in the world - a
way of codifying communication between entities, and as susceptible to manipulation by other
intelligences who can decode its biosemiotics.

Another side to this learning process to highlight is the difference between a biogenic and a
sociogenic understanding of AI. A biogenic one, like the superintelligence thesis of Nick
Bostrom, implicitly frames AI as a life form with a kind of inherent will. It’s a kind of digital
vitalism. A sociogenic framing, on the other hand, puts attention on the ways in which these
systems are shaped less by their own unfolding than by the ways we, consciously or not, shape
their inputs. How and where we capture what data, our willingness to monitor ourselves, the
economic and political incentives to store and label it, and so on. Putting the frame on the
sociogenic side of these systems lets us ask how other sociogenic forms might be possible. We
don’t pretend to claim our speculation is preferable, only that the field of possibility is wider
than currently acknowledged.

Cosmotechnical Milieu:

Our interest in the sociogenic side emerged specifically from the Russian context. Drawing on
the idea of cosmotechnics - the ways a particular cultural milieu shapes the development and
deployment of its technology - we strung together our own ‘minor cosmotechnics’ strand within
Russia - an attempt to develop an alternative cybernetics that both pre- and post-dates its
better-known North American articulation.

Cosmotechnics is the set of cosmological enablers and constraints for the development and
deployment of technology in a particular milieu. It stands against a technological universalism
that imagines a linear path forward for technology, one that neatly divides a singular ‘modernity’
from everything ‘premodern.’ But neither is it the renewal of a call for ‘tradition’ or ‘culture,’
which would simply invert the terms described above. Cosmotechnics is about paying attention
to the multiplicities of technicities, to create a space for reappropriating technology in the
construction of other worlds - not just presumed probable ones.

Self-consciously recognizing that our program could be accused of a kind of ‘accelerationism,’ we
counter with the observation that ‘acceleration’ is actually a change in velocity over time, rather
than just in magnitude. Not a change of speed, which is only a magnitude, but of velocity, which
implies a direction. Other directions are possible.

During the course of our research we dug into the Russian milieu and its long history of
unrealized programs and plans. A country of immense centralization, Russia has also perpetually
harbored internal tendencies rallying toward escape vectors, articulating platforms as alternatives
to the polarized top-down/bottom-up framing that structured cosmotechnical debates in
North America.

In the 1910s and 20s Alexander Bogdanov put forward a project of ‘Tektology’, an ambitious
attempt to uncover the structural similarities across domains of knowledge and practice, in
order to experimentally and iteratively translate the language of one process into another. It was
less a command-and-control attempt at nailing down a master narrative from which all practices
could be deduced than the creation of a field for abductive translation, that is, a way to convey
forms, ideas, and diagrams from one design problem to another. Translated into German in the
1930s, some of these ideas would inspire the emergence of cybernetics as a ‘universal science of
organization’ through Norbert Wiener in America in the 40s.

In the 1950s, the Khrushchev Thaw once again opened a space for alternative ideas, and it was
here that a military researcher, Anatoly Ivanovich Kitov, transposed Wiener’s cybernetics into a
Russian context, as a way to overcome the information-coordination problems that plagued a
planned economy. While his proposals were ultimately rejected, his successor, Viktor
Mikhailovich Glushkov, put forward an ambitious plan in 1962, the All-State Automated System,
or OGAS - a real-time, remote-access national computer network built on preexisting telephony
wires. While the central hub in Moscow granted authorization capabilities to users, the design
was decentralized to enable any user to contact any other user across the network without
central control.

That same year, Glushkov also founded the Institute for Cybernetics on the outskirts of Kiev,
which would run for 20 years. Here, researchers imagined a kind of smart neural network for
the economy, a system to virtualize currency, paperless offices, and natural language processing
for interfacing with computers semantically rather than syntactically. Seeing themselves as
independent of Moscow, they developed an entire virtual republic they dubbed ‘Cybertonia’ -
with passports, wedding certificates, newsletters, and a constitution.

In the 70s and 80s, the plans and dreams of an electronic socialism would splinter into
patchworks - non-interoperable systems, hampered by deeply-rooted institutional fractures set
in competition with one another. It would be in Silicon Valley instead that the internet as we know
it took root, an irony that Ben Peters describes as ‘capitalists behaving like cooperative socialists,
and socialists as competitive capitalists.’ But this period of retreat would not be without its
fecundity, as tinkerers retreated to working in their garages and sharing designs through DIY
journals and periodicals, and later, in the 2000s, exiting cities altogether for back-to-the-land
groups that would share alternative technological diagrams through their networks. But the
social-scale ambitions had once again to go underground.

This Russian minor cosmotechnics imagined intelligence in a more distributed, collective mode
than the individual-agent model of North America, including developing alternative game
theoretic models that didn’t assume the rules of the game were understood by participants, or
fixed in their nature. Our project tries to imagine how users, primarily humans but non-human
agents too - whether technological, animal, or vegetal - could be recruited in ways that help the
system learn and play, making sense of a world in constant process, composed of events more
than objects. Using cultural rituals to engage users and make links between phenomena it senses
through its ubiquitous input infrastructure, it plays with the multiple ways of making sense of an
environment - a metastable collective intelligence standing opposed to the model of a single
mind encased in a brain.

References:

For more on the following topics, see:

AI

Benjamin Bratton, “The City Wears Us. Notes on the Scope of Distributed Sensing and Sensation.”
http://www.glass-bead.org/article/city-wears-us-notes-scope-distributed-sensing-sensation/

Benjamin Bratton, “Another Matter: On Worldeating.”
https://mitpress.mit.edu/books/being-material

Matteo Pasquinelli, “Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference.”
http://www.glass-bead.org/article/machines-that-morph-logic/

Matteo Pasquinelli, “Abnormal Encephalization in the Age of Machine Learning.”
https://www.e-flux.com/journal/75/67133/abnormal-encephalization-in-the-age-of-machine-learning/

Anil Bawa-Cavia, “The Inclosure of Reason.”
https://technosphere-magazine.hkw.de/p/The-Inclosure-of-Reason-ecTsvnENeC1GXtmgRNaMH9

Jussi Parikka, “The Sensed Smog.”
https://eprints.soton.ac.uk/397510/1/parikka%2520media%2520moleculars%2520of%2520smog%2520culture%2520fibreculture%2520Accepted%25202016.pdf

Wendy Chun, “On Hypo-Real Models or Global Climate Change.”
https://philpapers.org/rec/CHUOHM

David Weinbaum and Viktoras Veitas, “Open-Ended Intelligence: The Individuation of Intelligent Agents.”
https://arxiv.org/abs/1505.06366

Brian Massumi, “What Animals Teach Us about Politics.”
https://www.dukeupress.edu/what-animals-teach-us-about-politics

Adam Greenfield, Radical Technologies.
https://www.versobooks.com/books/2742-radical-technologies

Claus Pias, “On the Epistemology of Computer Simulation.”
http://genealogy-of-media-thinking.net/wp-content/uploads/2013/06/CP0003.pdf

Pedro Domingos, The Master Algorithm.
https://www.penguin.co.uk/books/269/269590/the-master-algorithm/9780141979243.html

Michael Erler, “Playing Intelligence.”
https://jods.mitpress.mit.edu/pub/0l8x7kip

Cosmotechnics

Yuk Hui, The Question Concerning Technology in China: An Essay in Cosmotechnics.
https://www.urbanomic.com/book/question-concerning-technology-china/

Yuk Hui, “What Begins After the End of the Enlightenment?”
https://www.e-flux.com/journal/96/245507/what-begins-after-the-end-of-the-enlightenment/

Benjamin Peters, How Not to Network a Nation: The Uneasy History of the Soviet Internet.
https://mitpress.mit.edu/books/how-not-network-nation

Slava Gerovitch, “Artificial Intelligence with a National Face: American and Soviet Cultural Metaphors for Thought.”
http://web.mit.edu/slava/homepage/articles/Gerovitch-Artificial-Intelligence.pdf

Andrey Smirnov and Liubov Pchelkina, “Russian Pioneers of Sound Art in the 1920s.”
https://www.asmir.info/articles/Article_Madrid_2011.pdf

McKenzie Wark, Molecular Red.
https://www.versobooks.com/books/2288-molecular-red

Colleen McQuillen and Julia Vaingurt, The Human Reimagined: Posthumanism in Russia.
https://www.academicstudiespress.com/culturalrevolutions/the-human-reimagined

Oleksiy Radynski, “The Great Accelerator.”
https://www.e-flux.com/journal/82/134024/the-great-accelerator/

George M. Young, The Russian Cosmists: The Esoteric Futurism of Nikolai Fedorov and His Followers.
https://global.oup.com/academic/product/the-russian-cosmists-9780199892945?cc=ru&lang=en&

Boris Groys, Russian Cosmism.
https://mitpress.mit.edu/books/russian-cosmism

Bahar Noorizadeh, “After Scarcity.”
https://vimeo.com/294338659
