CULTURE, THEORY AND CRITIQUE
2019, VOL. 60, NOS. 3–4, 264–278
https://doi.org/10.1080/14735784.2019.1667255

The calculation of meaning: on the misunderstanding of new artificial intelligence as culture
Mercedes Bunz
Department of Digital Humanities, King’s College London, London, United Kingdom

ABSTRACT
A wide range of different AI systems based on the promising
technology of machine learning has been implemented into
everyday life without further ado, at times supplanting and
delivering institutional decisions. Following Gilbert Simondon’s
analysis in On the Mode of Existence of Technical Objects, this essay
explores the contemporary technical objects of new Artificial
Intelligence systems to ask if their premature acceptance might
be based on a misunderstanding: AI systems are able to calculate
meaning, whereby they are performing a task traditionally rooted
in the sphere of culture. Are AI-informed technical objects,
because of their new ability of calculating meaning, mistakenly
being read as an ‘aesthetic object’ thereby creating the illusion of
being ‘integrated’ into the world? And how could their integration
be understood differently? The article contributes to studies
located at the intersection of work on Simondon and digital
technology, thereby traversing Science and Technology Studies
and Philosophy of Technology.

Critical thought seems to be easily cast aside when it comes to new technological developments in Artificial Intelligence such as machine learning. This is important to consider, as machine learning is a promising technology based on an alternative programming paradigm: instead of an algorithm written by a programmer, the programmer now sets up a framework that runs a data analysis with the algorithm as its outcome. In other words, the programming is done by a statistical model that analyses a large set of data without explicit instructions, finding patterns and inferences in that data and categorising it (LeCun et al. 2015: 436) – a minimal sketch follows at the end of this paragraph. Ground-breaking transformations to the set-up of such
machine learning frameworks in 2014 and 2015 led to advances in a wide range of
fields from speech recognition, visual object recognition, object detection to many other
domains such as drug discovery and genomics (LeCun et al. 2015). Soon after, the technology was applied to quite diverse ends: from predictive text assistance in smartphones to the identification of objects and faces in photos, the interpretation of video material for self-driving cars, the analysis of medical images for diagnosing illnesses, the evaluation of performances recorded as data, and many more – a list which shows that the technology of machine learning is being used in multiple sectors and for a broad range of activities, some sensitive and others more playful (such as the
machine learning programme AlphaGo, which won the Google DeepMind Challenge
match by 4:1 against 18-time world champion Lee Sedol [Silver et al. 2017]).
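To make this alternative paradigm concrete, the sketch announced above follows here: a minimal, hypothetical illustration in Python using the scikit-learn library that Géron (2017) discusses. The toy texts and labels are invented for the example and stand in for the large sets of data a real system would analyse.

    # A minimal sketch of the machine learning paradigm: no classification
    # rules are written by hand; a statistical model infers them from
    # labelled examples (hypothetical toy data; scikit-learn library).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["win money now", "meeting at noon", "free prize inside", "lunch tomorrow"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    vectoriser = CountVectorizer()             # turns texts into word-count vectors
    features = vectoriser.fit_transform(texts)

    model = MultinomialNB()                    # a simple statistical model
    model.fit(features, labels)                # the 'algorithm' is learned, not written

    # The learned model now categorises unseen input:
    print(model.predict(vectoriser.transform(["free money prize"])))  # e.g. [1]

The point of the sketch is that the programmer only sets up the framework; the rule that separates the two categories is an outcome of the data analysis, not an explicit instruction.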
The wide variety of use cases for this technology is one among many points that provide a rationale for approaching machine learning through the framework of Gilbert Simondon's technical thinking in general, and his book On the Mode of Existence of Technical Objects in particular. One of Simondon's concepts in this book is to avoid thinking of a technical object as something fixed: 'the individual technical object is not this or that thing' (1958: 26). This point, which will be discussed in more detail below, clearly applies to the technical object that is machine learning; it shows, for example, in the wide variety of applications of its functioning.
Simondon also refused to approach a technology through its use cases, remarking that 'we reduce technical objects when we think of them starting from the way they are being used' (1958: XI). Because machine learning is a technology that delivers different aspects of 'intelligence' in the same way that electric energy delivers different forms of 'power', it is difficult to describe the technical object that informs new Artificial Intelligence in one single example. Comparable to the application of electricity (see Simondon 1982: 5), the intelligence applied by new Artificial Intelligence is both promising and risky.
While fully acknowledging the promising potential of new Artificial Intelligence, the aim
of this essay is to zoom in on an interesting phenomenon that can be found on its risky
side: the tendency to apply new Artificial Intelligence earlier than other technologies to
the real world, at times missing the step by which new technologies usually get critically
assessed – that of lab testing.
This phenomenon, that new Artificial Intelligence has been employed promptly and
widely, has been noticed by several scholars such as Adrian Mackenzie, who remarks in
his comprehensive study Machine Learners: ‘Known by various names – machine
learning, pattern recognition, knowledge discovery, data mining – the field and its
devices … seem to have quickly spread across scientific disciplines, business and com-
mercial settings, industry, engineering, media, entertainment and government’ (2017:
1). The quick spread of new Artificial Intelligence systems is also observed by the AI
Now Institute, an interdisciplinary research centre dedicated to understanding the
social implications of Artificial Intelligence: ‘From criminal justice to health care to edu-
cation and employment, we are seeing computational and predictive technologies
deployed into or supplanting private and governmental decision-making procedures
and processes’ (AI Now Institute 2018: 3). Such a deployment of new Artificial Intelli-
gence without further ado results in the technology not being tested thoroughly before-
hand, which is leading to cases of injustice often related to machine bias: there has been
extended critique, for example, on automated procedures of criminal risk assessment, in
which new AI is used for calculating the future criminality of inmates (Angwin et al.
2016; Oswald et al. 2018); on public teacher employment evaluations causing good tea-
chers to lose their jobs (O’Neil 2017: 3–11); and on millions of everyday discriminations
through search results that favour white people (Noble 2018). Here, one can find the ten-
dency that new Artificial Intelligence systems are being put to the test through their
direct implementation into everyday life. Such forms of testing ‘are explicitly designed
to implement experiments in social settings’, as Marres and Stark (2018) remarked in
the introduction to their workshop ‘Put it to the test: critical evaluations of testing’.
Regarding new AI, these forms of testing have been called by the AI Now Institute a
‘rampant testing of AI systems “in the wild” on human populations’, which they explain
as follows: ‘Silicon Valley is known for its “move fast and break things” mentality,
whereby companies are pushed to experiment with new technologies quickly and
without much regard for the impact of failures, including who bears the risk.’ (Whittaker
et al. 2018: 8).
It is this tendency of Western societies to accept such a real-life testing of new Artificial
Intelligence that guides this essay. A tendency that becomes most explicitly apparent in the
real-life testing of self-driving cars, which knowingly have cost several lives without being
shut down: the first fatal victim of a self-driving car accident in 2016 was the 40-year-old
driver Joshua Brown, who died in an underride collision while his Tesla car was running
on ‘autopilot’ mode. Tesla allows its customers to switch on the autopilot programme, which at the time of writing had been in beta mode for five years, ever since its launch in October 2014. While customers using the autopilot mode serve Tesla as untrained test drivers of their own free will and with the knowledge of several fatalities caused by the programme, the decision to take part in those real-life tests has not always been voluntary. Among the fatal accidents linked to self-driving cars is also that of the 49-year-old Elaine Herzberg, the first pedestrian to be run over and killed by an autonomously driving Uber while crossing a multilane street with her bike in March 2018. While the car that caused the accident by misidentifying Herzberg had even been supervised by a driver, a few days before the accident the governor of Arizona had announced that fully autonomous cars without anyone behind the wheel would be allowed to operate on public roads (State of Arizona 2018). In reaction to those fatalities, both Tesla and Uber began to publish regular safety reports, with Uber acknowledging shortcomings that contributed to the fatal crash. Examples such as these show that there is indeed a
‘rampant testing of AI systems … on human populations’ (Whittaker et al. 2018: 8).
Thus, one can curiously ask: what drives this premature acceptance of new Artificial Intelligence systems deployed without sufficient safety, i.e., without much critical
thought? Where is such an acceptance coming from, and how could one explain such a
phenomenon? Why are ethical considerations and critical thought so easily disregarded
when it comes to this type of technology?
To explore those questions, Gilbert Simondon is an excellent thinker to fall back upon when trying to clarify how the relevance of critical thought came to be diminished in what some deem the coming epoch of Artificial Intelligence. His thinking has by now
been applied many times to digital technologies that process and analyse data as his
approach is helpful to introduce a more complex understanding to digital operations
beyond their aspect of automation. In the following, this article will show this by reviewing
approaches turning to Gilbert Simondon in order to get a more nuanced understanding of
digital technologies. The paper then turns in a second step to the actual technical process
of machine learning, by roughly laying out how it calculates meaning for a better under-
standing. The fact that machine learning can calculate meaning, an area which so far has
been mainly rooted in the sphere of culture, will then lead to a third step and the question,
if and in what way the new capability actually repositions the mode of the technical object
or if it imitates a repositioning. For this, the paper turns to Simondon’s precise analysis of
the technical object and the aesthetic object, to show in a final step that there is a misun-
derstanding that might contribute to the premature integration of new Artificial Intelli-
gence systems shown above.

Digital technology and Simondon’s philosophy of technology


One of the core concerns in Simondon’s On the Mode of Existence of Technical Objects is
the fundamental misunderstanding of technical objects, which leads him to critique ‘ideas
related to automatism’ (1958: 17). This critique fits not just industrial machines but also
digital computers, the analysis of data, and even Artificial Intelligence, most explicitly in
the following quote:
The man who wants to dominate his peers calls the android machine into being … He seeks
to construct a thinking machine … We would like to show, precisely, that the robot does not
exist, that it is not a machine, no more than a statue is a living being, but that it is merely a
product of imagination and of fictitious fabrication, of the art of illusion. (16)

Against this fabrication, the point of opening up the black box of digital technology has
been taken up by various scholars.1 On a theoretical level, Yuk Hui’s (2016) comprehen-
sive analysis of ‘digital objects’ aims for a more complex understanding of digital systems
as places of relation against the misunderstanding of the digital as just immaterial or auto-
mated. Simon Mills (2015) points in a similar direction when using Simondon’s notion of
‘information’ to critique typical claims of Big Data as ‘reality mining’, which fail to account
for the ‘relations social systems have with each other and the environment’ (Mills 2015:
71). And recently Scott Wark (2019) conceptualised the ‘digital subject’ (following from
Goriunova 2019) with Simondon as a ‘technical entity’ that individuates by circulating
data, again foregrounding the relation we have with the digital.
Other scholars are linking theory to digital practice: Henning Schmidgen’s analysis of
‘Simondon’s Politics of Technology’ (2012) criticises the ‘black-boxed interfaces’ of our
digital devices. Schmidgen ends his essay with a call to open up that black box and to
embrace an ‘understanding and shaping of the material culture of contemporary societies’
(30). A few years later, Coté and Pybus (2016) put his claim into practice: to facilitate this
understanding, they initiated a mobile phone workshop (including teenage users as well as
hackers) as a practice-led opportunity to rethink the contested relationship between the
human and technology. Their hackathon workshop was also used to study the data collection of mobile Android devices, thereby creating a different understanding of and relationship between mobile phone owners and their technology, while at the same time exploring the workshop as a method.
Approaches like these aim for a repositioning of digital technology as a more open
entity and against the misunderstanding of digital technology as automation; a misunder-
standing that continues with the technical object that is new Artificial Intelligence, as we
will see. This time, however, one finds, besides the imagination of automation, also its premature implementation. With Simondon, this paper therefore asks: what aspect might cause the prompt acceptance of new AI systems? To answer this, the paper will now turn to the technology that mostly informs new Artificial Intelligence systems, i.e., machine learning. If the mode of existence of the contemporary technical object has changed, it will likely have changed because of this new technical development that calculates meaning. So what is this new technology capable of? In what way is its mode of existence different, or is it rather that the different treatment was built on a misunderstanding?

1. For reasons of time and brevity, this overview concentrates on English-language publications, in the knowledge that there are many further publications linking the digital and technology in French, German, Italian and other languages.

The calculation of meaning


Symbolic information packed with meaning has so far been a field in which algorithms failed. Here, the new paradigm of 'machine learning' has proven to be partly successful. The processing of complex symbolic information – what it is that is shown in an image, or what is meant by a sentence – was a task which could not be carried out sufficiently well by digital means. The complexity of symbolic information could neither be summarised in a set of rules written by programmers, nor could those rules (or the data) be processed by algorithms at sufficient speed. After years of frustration and testing, the radically different approach of machine learning started to show promising results, with its statistical models processing thousands of examples to infer probability rules from those big sets of data. Around 2014, the breakthrough of this so-called 'machine learning' came with a system architecture known as 'deep neural networks', which allowed for 'deep learning'. This deep machine learning became a 'science (and art) of programming computers so they can learn from data' (Géron 2017: 10), with neural networks identifying characteristics in the learning material that they process. Thus, 'machine learning can … be viewed as a change in how programs, or the code that controls computer operations, are developed and operate' (Mackenzie 2017: 6).
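As a hedged illustration of this 'change in how programs … are developed and operate', a deep neural network can be set up in a few lines. The following sketch uses the Keras API that Géron (2017) introduces; the layer sizes and input shape are arbitrary choices made only for the example.

    # Sketch of a 'deep' neural network: the programmer specifies only the
    # architecture and the training procedure; the network's weights - and
    # with them its behaviour - result from the learning material it processes.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # e.g. small greyscale images
        tf.keras.layers.Dense(128, activation="relu"),    # hidden layers identify
        tf.keras.layers.Dense(64, activation="relu"),     # characteristics in the data
        tf.keras.layers.Dense(10, activation="softmax"),  # probabilities for 10 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Training replaces explicit instruction with statistical inference, e.g.:
    # model.fit(training_images, training_labels, epochs=5)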
It was this change of programming that opened up new areas of symbolic information: digital technology, which had excelled before in the areas of calculation and the management of information and communication, was becoming more and more able to process language and images successfully. As shown above, soon algorithms trained according
to machine learning principles became widely applied – from our phones suggesting
words when texting to medical applications offering a diagnosis after processing our
medical images. They also became widely written about – from academic introductions
for computer scientists (Goodfellow et al. 2017) to comprehensive science and technology
studies (Mackenzie 2017) to general introductory explanations of new AI systems in
support of critical reflections regarding its application (Bunz and Meikle 2018: 45–90;
Chun 2018). This attention is built on the fact that, thanks to the statistical approach of machine learning and its deep neural networks, the algorithms that had been created through learning from data could now calculate the content of those sentences or images – or in other words: they could calculate meaning.
Of course, one first needs to clarify what is meant when we say that meaning can be cal-
culated. The notion of ‘meaning’ is often linked to the area of culture, where one finds dis-
cussions about the production as well as the encoding and decoding of meaning (for
example, Hall 1980). It is within culture and the reading of culture that meaning has been positioned in a central role, as Raymond Williams showed exemplarily in his comprehensive study Culture and Society (1983). Following from Williams’ observations, the
use of the concept ‘calculation of meaning’ here is to signify that the programmes informed
by machine learning are able to perform a new task, which is the encoding and decoding of
language and images. In other words, the term ‘calculation of meaning’ signifies that
machine learning crosses over into the field of cultural symbols through the analysis of
languages, images, and other patterns. This calculation of meaning – meaning here defined as ‘the purport or message conveyed by words, phrases, sentences, signs, symbols, and the like’ (McArthur and Lam-McArthur 2018) – is repositioning technology again, a technology that is always unfinished, incomplete and in progress (Simondon 1958: 133). If digital technology has started to enter the sphere of meaning, could it be that its pos-
ition is being shifted towards the sphere of culture? And might this be the reason for the
prompt acceptance of Artificial Intelligence systems into our world? To clarify this, one
needs to understand firstly the positioning of culture and technology in Western society,
a positioning of central concern to Simondon’s philosophy of technology that prominently
frames his study On the Mode of Existence of Technical Objects.

Simondon’s take on technology and culture


The ‘false’ opposition drawn between culture and technics is the first point Simondon pre-
sents in the introduction of On the Mode of Existence of Technical Objects (1958). It is of no
surprise that this issue and the relationship between culture and technology is being
further explored in texts such as ‘Psychosociologie de la technicité’ (1961) and ‘Culture
and Technique’ (1965) and gets taken up again much later in Simondon’s letter to
Derrida ‘On Techno-Aesthetics’ (1982). Here, Simondon’s ‘persistent experimentation’,
as Andrea Bardin (2015: 16) once aptly named his philosophical operation, also applies
regarding this topic. Even though On the Mode of Existence of Technical Objects focuses
on technical objects, the introduction makes clear that a reconfiguration of the false oppo-
sition between culture and technology is key as it is this opposition that leads to the tech-
nical object being misrecognised. Culture, states Simondon, does not allow the technical
object inside the cultural world of significations, on the contrary: ‘Culture behaves
towards the technical object as man toward a stranger, when he allows himself to be
carried away by primitive xenophobia’ (Simondon 1958: 16; see also Bunz 2014: 39–
42). The technical object, the machine, is treated with suspicion. However, inside this
stranger ‘something human is locked up’ (Simondon 1958: 16). ‘What resides in the
machines’, Simondon writes, ‘is human reality, human gesture fixed and crystallized
into working structures’ (Simondon 1958: 18). An awareness of this technical-human
reality is missing due to a non-knowledge of the nature and essence of technical objects
(19). Looking at the technical object from the philosophical perspective in On the Mode
of Existence of Technical Objects, this is the hope of Simondon, will allow the introduction
of the technical being into culture (21). And as the misunderstanding of the technical
object ‘new Artificial Intelligence’ as culture is the focus of this essay, it is important to
follow Simondon’s outline of thinking technology and culture in On the Mode of Existence
of Technical Objects, which gets analysed most explicitly in the third part of the book.
Overall, Simondon’s philosophical study of technical objects aims at introducing a
more complex understanding, for which its first two parts focus on ‘an awareness of
the nature of machines, of their mutual relations and of their relations with man’ (19),
while the last part analyses the specific relations of technical objects in light of other
relations, for example those of religious or aesthetic objects. This last part, called ‘The
Essence of Technicity’ positions technical objects next to objects from other ‘cultural’
schema, which are religious, aesthetic, magical or philosophical. It has been called ‘key
to understanding what Simondon really means by “technical culture”’ (Combes 2013:
61) whilst being acknowledged at the same time as a ‘paradox’ and ‘genuinely speculative’
(Bardin 2015: 167).2 For reasons of brevity, the following analysis will not take all different
schema discussed in that book into account (for an overall analysis of the third part, see for
example Bardin 2015: 165–216; Combes 2013: 61–63; and Barthélémy 2011). Given the
fact that this essay is interested in the misunderstanding of new Artificial Intelligence, exploring whether the calculation of meaning has changed – or merely obscured – the positioning of the technical object that is machine learning, the next section will focus mainly on Simondon’s differentiation of the technical and the aesthetic object.

The technical object and the aesthetic object


Simondon’s description of the technical and aesthetic object is a precise positioning. In
sum, both objects are described as a mediation between the human and the world,
although their mediation functions in disparate ways.
The technical object is first and foremost defined by its specific distance from the world:
‘The availability of the technical thing consists in being liberated from the enslavement to
the ground of the world’ (Simondon 1958: 183, emphasis added). He points out further: ‘
… in technics the whole of reality must be traversed, touched, and treated by the technical
object, detached from the world and applicable to any point and at any moment’ (183,
emphasis added).
This detachment makes the technical object easily applicable to many points and many
moments. Thus, its detachment fills it with a certain force: by being detached the technical
object gains a capability. This works in a two-sided way: its detachment entails a certain
liberation ‘from the enslavement to the world’ (1) due to which it is also able to violate that
world for better or worse (2). In Simondon’s words: ‘Technical activity constructs separ-
ately, detaching its objects, and applying them to the world in an abstract and violent way’
(195, emphasis added). It is this force of technology – at once liberating and violating –
that turns the technical object into one that has the power to intervene. Using the force
of being able to apply itself to the world in an abstract and violent way, the ‘technical
object’, writes Simondon (183), ‘intervenes as a mediator between man and the world’.
When turning now to the relation of the aesthetic object to the world, one can quickly
see that it operates in a very different way. It is here worth quoting Simondon in full:
Technical activity constructs separately, detaching its objects, and applying them to the world
in an abstract and violent way; even when the aesthetic object is produced in a detached way,
as a statue or a lyre, this object remains a key-point of a part of the world and of human
reality; the statue thus placed before a temple is what makes sense for a defined social
group, and the mere fact that it is placed, in other words that it occupies a key-point that
it uses and reinforces but does not create, shows that it is not a detached object. (195)

For Simondon, the aesthetic object is the opposite of being detached as it is integrated into
the world: ‘It is indeed integration that defines the aesthetic object, and not imitation’,
writes Simondon (195). By stressing the aspect of its integration, Simondon’s approach
towards the aesthetic object goes even further. Swapping integration with imitation, he
is not only refuting the classic aesthetic theory of mimesis but also relocates the
moment of beauty from the object to our encounter with it: ‘It is never the object strictly
speaking that is beautiful; it is the encounter’, he writes and repeats two pages later: ‘Real
aesthetic feeling cannot be enslaved to an object’ (202–204).
2. Bardin (2015: 167) reports that this section of the book has been the one ‘rarely taken into consideration by the critique’, even though, according to Simondon’s son, it is supposedly the one Simondon was most attached to (Bardin cites Hottois 1994: 118).

The integration of the artwork, however, is also not straightforward. For Simondon, an
aesthetic object is defined by two links: ‘The aesthetic work is … linked to the world and to
man’ and ‘also linked to other works’ ‘as a unique intermediate reality’ (200). These two
lines or anchors – being linked to the world and having links to other works of art – define
its characteristics as it is ‘characterized by the possibility of passage from one work to
another according to an essential analogical relation’ (200).
Here, we encounter a very different mode of existence of being in the world than the
one that could be found with the technical object. There is no violence, but instead ‘an
essential analogical relation’ – ‘analogy’ being for Simondon: ‘the foundation of the possi-
bility of going from one term to another without a negation of the term by the succeeding
one’ (200–201, emphasis added). This capacity of an analogy without a negation enables
the work of art to establish links with elements of the world, whereby it integrates itself
into the world. Or in Simondon’s words: the ‘work of art re-establishes a reticular universe
at least for perception’ (192), and this ‘aesthetic universe is partial, integrated, and con-
tained in the real and actual universe’ (192, emphasis added). And with this point,
finally the particular force of the aesthetic object comes to the fore. Its specificity is to
be contained in the real and to remain integrated into the world, while at the same
time establishing a reticular universe through this integration through which the new
emerges: ‘Art is that through which a new reticulation emerges … and as a consequence
of this new reticulation there is the emergence of a real universe’ (204, emphasis added).
A point he already made ten pages earlier when saying about ‘aesthetic reality’ that it ‘is
a new mediation between man and the world, an intermediate world between man and
the world’ (194, emphasis added).
But let us take a step back and compare both positionings, that of the aesthetic object
with that of the technical object. Both introduce an aspect of intermediacy; they are both
characterised as being intermediate. The specificity with which they produce this interme-
diacy, however, remains fundamentally different. The aesthetic object is linked to an emer-
gence of a new reticulation, an intermediate world. It creates a reality as a new mediation
between man and the world through ‘an intermediate world’. The technical object is
located in a fundamentally different position: it is not an intermediate world, but an inter-
mediate tool, which ‘intervenes as a mediator between man and the world’. Unlike the aesthetic object, it is not linked to the world but is necessarily detached. Unlike the aesthetic object, it does not create but intervenes. Thus, one can see that according to
Simondon’s definition, both objects are intermediates, while they show very different ten-
dencies as to how to implement their intermediate capacity.
Now that we have established the different positionings of the technical and aesthetic
object, we can return to our initial question: how is it that the force that characterises all technical objects – being detached and liberated from the world, a force which allows technology to apply itself to the world in a violent way and thereby to intervene between man and the world – often seems disregarded when it comes to new Artificial Intelligence systems? Is the reason for the ‘rampant testing of AI systems “in the wild” on human populations’, as the AI Now Institute called it, that the technical objects that use machine learning techniques and calculate meaning are assumed to be of another character than other technical objects? To answer this question, the next section will study whether AI systems, by calculating meaning, are in fact coming closer to the sphere in which we traditionally position meaning, i.e., closer to culture. Could AI systems rightly be treated as aesthetic objects?

The mistaken identity of new Artificial Intelligence


Above, this essay stated that new Artificial Intelligence has become much better at analysing language as well as images than ever before, and that accordingly one could say: if new
Artificial Intelligence systems are able to do this, they have entered the plane of meaning.
What has not been discussed, however, is in what way this entrance onto the plane of
meaning has happened and is happening. So far, this essay described the abilities of
new Artificial Intelligence systems to analyse images and/or language as the ‘calculation
of meaning’ thereby seemingly shifting the position of technology towards the sphere of
culture – at least in parts. Historically, the analysis of meaning has been a human (or
living being) dominated sphere – not anymore. The next aspect to be looked at here is in what way machine learning calculates meaning. Is it understanding
what is meant with an image or sentence? In what way are new Artificial Intelligence
systems making their entrance onto the plane of meaning?
At first sight there seems to be no difference anymore. Tests have shown that by now both – human and machine – are able to correctly identify the objects displayed in an image, as recognition systems tested, for example, in the ImageNet Large Scale Visual Recognition Challenge, which started in 2010, became better and better over the years – by now an accuracy of over 95 per cent in the competition is not unusual. When the computer scientist Andrej Karpathy (2014) tested himself as an example of the recognition accuracy rate of humans, he had a classification error rate of 5.1 per cent; the reason for this is that some images show specific breeds of dogs with which a trained neural network is more familiar than the non-dog expert a computer scientist such as Karpathy usually is.
The way of obtaining such a high level of accurate identifications, which is on a par with the identifications of humans, however, remains machine-specific. Artificial Intelligence systems specialised in object recognition identify the objects depicted in an image in a very particular way: they record the pixel formations of an image, i.e., its edges and textures, its shades and its different regions of colour, and then calculate statistically what those formations most probably depict. That means that they first statistically register edges, colour shades and other patterns, and then link their statistical findings to possible identifications that the system has learned from a set of correctly labelled data (this is the case for so-called ‘supervised learning’). By linking their statistical findings, they arrive at the label with the highest probability: what can be seen in an image (a tabby cat) is classified as ‘cat (94%)’, though it could also be a ‘dog (36%)’ and, less likely, a ‘duck (2%)’ (example taken from Shanmugamani 2018).
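A hedged sketch can illustrate this procedure. It uses a pretrained network from the Keras applications module rather than the system described by Shanmugamani (2018), and the image file name is hypothetical; what it shows is that the output is a list of statistically weighted labels, not an interpretation.

    # Sketch: a pretrained convolutional network assigns statistical labels to
    # an image - it returns probabilities, not an 'understanding' of the scene.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")       # trained on labelled ImageNet data

    img = image.load_img("tabby_cat.jpg", target_size=(224, 224))  # hypothetical file
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    predictions = model.predict(x)             # one probability per learned label
    print(decode_predictions(predictions, top=3)[0])
    # e.g. [('n02123045', 'tabby', 0.94), ('n02123159', 'tiger_cat', 0.04), ...]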
To summarise the above procedure: there is no ‘understanding’. In new AI systems,
meaning is not understood but calculated, which is a fundamentally different approach
from an integrated understanding of meaning. Its approach remains technical and
detached, instead of an ‘integrated’ approach of contextual understanding which Simon-
don described for an aesthetic object. While machine learning is highly effective in analys-
ing images, language and data, it remains an intelligence that operates fundamentally
differently from the human intelligence of understanding. There is no perceiving of ‘the
intended meaning’. New Artificial Intelligence systems establish meaning through calcu-
lation; they have no concept of what might have been intended. This becomes further
apparent when one looks at recent research that has experimented with cases in which
those systems and their identifications have been misguided (Geirhos et al. 2018).
When studying different contemporary system architectures such as AlexNet, VGG, GoogLeNet and ResNet-50, and experimenting with the modified training dataset ‘Stylised ImageNet’, computer scientists from the University of Tübingen (Geirhos et al. 2018) realised that those models could easily be confused. The reason: the current analysis of images by new AI systems is heavily biased towards texture. When they created images with conflicting texture and shape cues – such as a cat shape with an elephant-skin texture instead of fur – the identification failed. This shows how the systems identify: they approach an image via its texture, which means they struggle to identify more abstract, larger shapes. A cat with an elephant texture was an elephant for the AI programme, while it still was a cat to humans (Geirhos et al. 2018: 2). This experiment again confirms that the calculation of meaning we find in new AI remains fundamentally different from an understanding of meaning.
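The same kind of sketch can be extended into a hedged version of such a cue-conflict test. The file name is again hypothetical – Geirhos et al. (2018) generated comparable images with style transfer – and the commented output is what their findings would lead one to expect.

    # Sketch of a cue-conflict test: an image with the shape of a cat but the
    # texture of elephant skin (hypothetical file, e.g. made via style transfer).
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")
    img = image.load_img("cat_shape_elephant_texture.png", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    print(decode_predictions(model.predict(x), top=3)[0])
    # A texture-biased model tends to rank 'African_elephant' first here,
    # while human observers still report seeing a cat (Geirhos et al. 2018: 2).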
With Simondon one could say that those models are still ‘detached from the world’,
because the meaning they calculate remains linked to statistical models and not linked
to the meaning of the world. In other words, AI informed technical objects can now cal-
culate meaning and this technical evolution has opened a door that has so far been closed;
still, this does not mean that these objects are integrated in the same manner as an aesthetic object that integrates the world around it – they remain technical. What we are facing
is therefore a case of a mistaken identity: new AI systems are imitating the understanding
of meaning by calculating it, but they are not understanding – they lack the ability to link
their classifications in an integrated way to a wider, constantly shifting context. Their cal-
culations are not integrated into the world but remain detached.
Interestingly, such a mistaken identity is nothing new for a technical object from a Simondonian perspective – the phenomenon has already been described in On the Mode of Existence of Technical Objects: while in principle, there can be a transition
between the technical and the aesthetic object – technical objects ‘have an aesthetic
value’ (196) when they are ‘integrated into the natural or human world’ (199) – in practice
the aesthetic object is often ‘enveloping and masking the technical object’. This mask does
not find Simondon’s approval: ‘Every disguise of a technical object generally produces the
uncomfortable impression of a fake and appears like a materialized lie’ (196). Exactly this
aspect is taken up again in ‘Psychosociologie de la technicité’ (1961: 37) when Simondon
describes ‘the obligation of the technical object to wear a veil or a disguise to penetrate the
cathedral of culture’. This is especially the case for what he calls crypto-technical objects, objects that pretend to be something they are not, such as a fake fireplace that only simulates a flickering fire. The other type of technical object is described by Simondon as a phanero-technical object, which does the opposite: aesthetically, it is organised around a technical element, such as the tractor around its strong engine or the sports car proudly exhibiting its exhaust (38–39). The case of new AI misunderstood as an aesthetic
object can be related to these observations, although new AI draws on other aspects of aes-
thetic objects.
While Simondon described the disguise of the technical object to enter the ‘cathedral of
culture’ (1961: 37) as a material one, i.e., as a technical object that bowed to aesthetic stan-
dards that had no technical function, new Artificial Intelligence is taking a different route.
What it delivers seems to imitate an understanding of our world, a task that so far has been
exclusively linked to aesthetic objects which produce meaning by being integrated into our
world. Imitating their understanding through calculation is the reason why the technical
object that is new Artificial Intelligence has been mistakenly regarded as also being inte-
grated into our world. In a similar way to the crypto-technical object it simulates an aes-
thetic function it does not have: an understanding of the world that arises from being
integrated.

Conclusion and final thoughts


In order to understand how it can be that ethical considerations and critical thought have been so easily disregarded when it comes to new AI systems, the inquiry of this article has admittedly linked together disparate areas: Simondon, machine learning, technical descriptions, critical thought and the question of why and how we mistake AI for an aesthetic object. Being aware that linking those areas can at times be quite confusing, the following section will try to show the thread that runs through this argument once more, before it turns to the conclusion.
Gilbert Simondon’s detailed exploration of the meaning of technical objects has provided important clues for understanding the immediate integration of new AI systems based on machine learning into the midst of our society, which seemingly shuns critical thoughts about them. Guided by his approach, this paper has explored the phenomenon of such a premature acceptance, which makes those systems appear integrated into this world before being put to the test – as if the mode of existence of technical objects, which Simondon described as ‘detached’ from the world and as something that turns to the world ‘in an abstract way’, had changed.
By analysing the functioning of new AI systems informed by machine learning, the essay pointed to aspects that could be linked to this phenomenon: statistical models in general and the machine learning approach in particular allowed computer programming to enter the plane of meaning, in which it could not operate sufficiently successfully before. Now, a different way of programming makes it possible to process the uncertainties and complexities that typically arise when categorising language or images, and so to learn what might be meant by sentences and what might be depicted in images. At the same time, this inquiry could show that, in order to identify meaning, pattern recognition has become the principle from which this capability evolved and on which its identification relies. Machine learning models that recognise images, for example, are biased towards texture and rely on small entities such as edges from dark to light and light to dark; they calculate those statistically in order to infer the correct meaning, as a recent experiment showed (Geirhos et al. 2018). While the technical process of calculating meaning is a detached procedure that follows its own, very specific, technical logic, the outcome of this procedure can successfully imitate the understanding of meaning of living beings. And it is this imitation of understanding that allowed the technical object to become linked to a plane that had been closed to it before – the plane of meaning predominantly rooted in culture, to which the technical object now seemed to have come closer.
Turning here to Simondon’s comparison of the positioning of the technical and the cultural, aesthetic object, combined with a close reading of the principles of machine learning, then allowed us to see that the calculation of meaning is mistaken for an understanding, although the two processes of designating meaning could not be more different. Or in other words: even though both the technical and the aesthetic object are linked to the identification of meaning, they could not arrive at that meaning in more different ways. Still, the imitation of understanding by the technical object that is machine learning seems to lead to the technical object being welcomed as if it were integrated into our world: it is mistaken for a cultural, aesthetic object instead of being a detached one that
turns to the world in an abstract and violent way. Taking on a Simondonian perspective
makes their confusion understandable, and with it the premature integration of new AI
systems.3 Turning to Simondon, the inquiry could show that the technical and the aesthetic object share the same aspect of being intermediate. However, the aesthetic
object and its intermediacy operates by being deeply integrated into the world; the tech-
nical object, on the other hand, is detached from the world; it is an intermediate tool.4 The
intermediate tool of new AI has the capability of calculating meaning, but it remains a tool.
This tool allows an imitation of understanding, but not an actual ‘understanding’ – its
meaning remains calculated. To not acknowledge this means that Artificial Intelligence
systems are being misunderstood as ‘understanding’ and ‘integrated’, and it is this
which allows them to be implemented prematurely.
Understanding the calculation of meaning by new Artificial Intelligence as a technical
object would call its integration into question, thereby bringing to the fore its exceptional capability
and with it an aspect much more interesting: that its ‘intelligence’ does not function in the
same way as the one we know from living beings. What is its specific technical logic?
How does this other way of being intelligent work?
Instead of curiously asking such questions, the contemporary approach is to cover up this ‘other’ intelligence in black box systems, as the focus is on the capability of new Artificial Intelligence for ‘automation’. Here, a Simondonian reading of new Artificial Intelli-
gence systems in future research could help to shift this. Approaching new Artificial
Intelligence as a technical object allows a much more open approach – and for Simondon
the openness, the ‘margin of indeterminacy’ has been deeply linked to ‘a progressive per-
fecting of machines’ (1958: 17). Future work could use Simondon to develop a thinking
that aims at opening up the black box (Schmidgen 2012) of new Artificial Intelligence
to bring out existing and new relations (Mills 2015; Coté and Pybus 2016). Instead of a
black box, there is certainly room for more complex interfaces than the final decisions
most new AI interfaces currently have on offer. Interestingly, Simondon (1958: 135)
himself already touches on the topic of Big Data when discussing the collaboration
of technical and human memory regarding magnetic tape recordings that are ‘capable
of retaining monomorphic documents that are very complex, richly detailed, and
precise for a very long time’; remarks that focus on the very specific skill technical
digital memories have on offer. Understanding the specific skill of the statistical intelligence that informs machine learning would then lead away from debates that fear the loss of a human skill through its automation, and towards the collaboration between an intelligent machine and a differently intelligent human. How can those two different ways of being intelligent work best together? Of what new ways of being intelligent are they together capable? Questions like these could help answer the call of some experts to link human autonomy more strongly to artificial autonomy (Floridi and Cowls 2019). And last but not least, it would be a way of moving beyond the current misunderstanding.

3. One could consider whether such a premature integration is not generally an aspect typical of digital technology and therefore an effect on a larger scale. In the text ‘Technical Mentality’ (1961), Simondon explores the genesis of technical culture by describing different technical stages such as the artisanal modality, the industrial modality and the network modality, each following a re-constellation of two ‘sources’: ‘energy’ and ‘information’ (5). Analysing these aspects could help to understand further why and how the ‘technical reality’ often becomes misunderstood as actual reality, which Brian Massumi also notices when talking about a ‘shift toward a world integrally reshaped – culturally, socially, and economically – by digital technologies’ (Massumi 2012).

4. There are cases in which new Artificial Intelligence is being used to create aesthetic objects, such as the neural network-generated nude portraits which the artist-programmer Robbie Barrat calculated using progressive growing of Generative Adversarial Networks.

Acknowledgements
I owe many thanks for this essay, starting with Conor Heaney’s and Iain Mackenzie’s invitation to
the conference Culture & Technics: The Politics of Simondon’s Du Mode at the Centre for Critical
Thought, University of Kent. The piece has also been thoroughly informed by the work and con-
versation with Noortje Marres, and her inspiring workshop ‘Put it to the test’ (together with David
Stark in London, December 2018), a workshop that has been tremendously important for this text.
Many thanks to the reviewers’ profound but also constructive feedback that helped shape this
essay, and to the careful and thorough edits and suggestions by the journal’s editor Christopher
C. Barnes. And last but not least to my Macbook for bearing with me and all the PDFs it had to
open and process.

Disclosure statement
No potential conflict of interest was reported by the author.

Notes on contributor
Mercedes Bunz is Senior Lecturer in Digital Society at the Department of Digital Humanities, King’s
College London. Her most recent books are The Internet of Things (Polity 2018), written with Graham
Meikle; and the open access publication Communication (University of Minnesota Press/meson
press 2019), which discusses how machine communication has changed the notion of communi-
cation written with Finn Brunton and Paula Bialski. She is a member of the international and inter-
disciplinary Research Network for the Critical Humanities, Terra Critica.

ORCID
Mercedes Bunz http://orcid.org/0000-0003-2876-0522

References
AI Now Institute. 2018. ‘Litigating Algorithms: Challenging Government Use of Algorithmic
Decision Systems’ [report]. New York: AI Now Institute. Available online at https://
ainowinstitute.org/litigatingalgorithms.pdf (accessed 6 September 2019).
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. 2016, May 23. ‘Machine bias’. ProPublica. Available
online at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-
sentencing (accessed 6 September 2019).
Bardin, A. 2015. Epistemology and Political Philosophy in Gilbert Simondon: Individuation,
Technics, Social Systems. New York: Springer.
Barthélémy, J.-H. 2011. ‘Quel mode d’unité pour l’œuvre de Simondon?’. Cahiers Simondon 3, 131–
148.

Bunz, M. 2014. The Silent Revolution: How Digitalization Transforms Knowledge, Work,
Journalism, and Politics Without Making Too Much Noise. Basingstoke: Palgrave.
Bunz, M. and Meikle, G. 2018. The Internet of Things. Cambridge: Polity.
Combes, M. 2013. Gilbert Simondon and the Philosophy of the Transindividual. Cambridge, MA:
MIT Press.
Coté, M. and Pybus, J. 2016. ‘Simondon on Datafication. A Techno-Cultural Method.’ Digital
Culture & Society 2:2, 75–92.
Chun, W. 2018. ‘Queerying Homophily’. In C. Apprich, F. Cramer, W. Chun and H. Steyerl (eds),
Pattern Discrimination. Meson press and University of Minnesota Press, 59–97. Available online
at https://meson.press/wp-content/uploads/2018/11/9783957961457-Pattern-Discrimination.
pdf (accessed 6 September 2019).
Floridi, L. and Cowls, J. 2019. ‘A Unified Framework of Five Principles for AI in Society’. Harvard
Data Science Review. Available online at https://doi.org/10.1162/99608f92.8cd550d1 (accessed 6
September 2019).
Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., and Brendel, W. 2018.
‘ImageNet-Trained CNNs are Biased Towards Texture; Increasing Shape Bias Improves
Accuracy and Robustness’. Available online at https://arxiv.org/abs/1811.12231 (accessed 6
September 2019).
Géron, A. 2017. Hands-on Machine Learning with Scikit-Learn and TensorFlow. Sebastopol, CA:
O’Reilly Media.
Goodfellow, I., Bengio, Y. and Courville, A. 2017. Deep Learning. Cambridge, MA: MIT Press.
Goriunova, O. 2019. ‘The Digital Subject: People as Data as Persons’. Theory, Culture & Society.
Available online at https://doi.org/10.1177/0263276419840409 (accessed 6 September 2019).
Hall, S. 1980. ‘Encoding/Decoding’. In S. Hall, D. Hobson, A. Lowe and P. Willis (eds), Culture,
Media, Language: Working Papers in Cultural Studies, 1972–1979. London: Hutchinson, 128–138.
Hottois, G. 1994. ‘Gilbert Simondon entre les Interfaces Technique et symbolique’. In F. Tinland
(ed), Ordre Biologique, Ordre Technologique. Seyssel: Champ Vallon, 72–95.
Hui, Y. 2016. On the Existence of Digital Objects. Minneapolis: University of Minnesota Press.
Karpathy, A. 2014. ‘What I Learned From Competing Against a ConvNet on ImageNet’. Andrej
Karpathy blog. Available online at http://karpathy.github.io/2014/09/02/what-i-learned-from-
competing-against-a-convnet-on-imagenet/ (accessed 6 September 2019).
LeCun, Y., Bengio, Y. and Hinton, G. 2015. ‘Deep Learning’. Nature 521:7553, 436–444. Available
online at https://www.nature.com/articles/nature14539 (accessed 6 September 2019).
Mackenzie, A. 2017. Machine Learners: Archaeology of a Data Practice. Cambridge, MA:
MIT Press.
Marres, N. and Stark, D. 2018. ‘Put It To the Test: Critical Evaluations of Testing’ [written work-
shop introduction].
Massumi, B. 2012. ‘‘Technical Mentality’ Revisited: Brian Massumi on Gilbert Simondon’. In A. De
Boever, A. Murray, J. Roffe and A. Woodward (eds), Gilbert Simondon: Being and Technology.
Edinburgh: Edinburgh University Press, 19–36.
McArthur, T. and Lam-McArthur, J. 2018. ‘Meaning’. In The Oxford Companion to the English
Language. Oxford: OUP. Available online at http://www.oxfordreference.com/view/10.1093/
acref/9780199661282.001.0001/acref-9780199661282-e-770 (accessed 6 September 2019).
Mills, S. 2015. ‘Simondon and Big Data’. Platform: Journal of Media and Communication 6, 59–72.
Noble, S. U. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU
Press.
O’Neil, C. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens
Democracy. New York: Broadway Books.
Oswald, M., Grace, J., Urwin, S. and Barnes, G. C. 2018. Algorithmic Risk Assessment Policing
Models: Lessons from the Durham HART Model and ‘Experimental’ Proportionality.
Information & Communications Technology Law 27:2, 223–250.
Schmidgen, H. 2012. ‘Inside the Black Box: Simondon’s Politics of Technology’. SubStance 41:3, 16–31.
Shanmugamani, R. 2018. Deep Learning for Computer Vision. Birmingham: O’Reilly and Packt
Publishing.

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L.,
Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T. and
Hassabis, D. 2017, ‘Mastering the Game of Go Without Human Knowledge’. Nature 550:7676,
354–359.
Simondon, G. 2017 [1958]. On the Mode of Existence of Technical Objects. Translated by Cecile
Malaspina and John Rogove. Minneapolis: Univocal.
Simondon, G. 2014 [1961]. ‘Psychosociologie de la technicité’. In Sur La Technique (1953–1983).
Paris: Presses Universitaires de France, 27–30.
Simondon, G. 2015 [1965]. ‘Culture and Technics’. Radical Philosophy. Available online at https://
www.radicalphilosophy.com/article/culture-and-technics-1965 (accessed 6 September 2019).
Simondon, G. 2012 [1982]. ‘On Techno-Aesthetics’. Translated by A. De Boever. Parrhesia 14, 1–8.
Simondon, G. 2012. ‘Technical Mentality’. Translated by A. De Boever. In A. De Boever, A. Murray, J.
Roffe and A. Woodward (eds), Gilbert Simondon: Being and Technology. Edinburgh: Edinburgh University Press, 1–15.
State of Arizona, Executive Order. 2018. ‘Advancing Autonomous Vehicle Testing and Operating;
Prioritizing Public Safety’. Available online at https://azgovernor.gov/sites/default/files/related-
docs/eo2018-04_1.pdf (accessed 6 September 2019).
Wark, S. 2019. ‘The Subject of Circulation: on the Digital Subject’s Technical Individuations’.
Subjectivity 12:1, 65–81.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., Myers West, S.,
Richardson, R., Schultz, J. and Schwartz, O. 2018. AI Now Report. New York: AI Now
Institute/New York University.
Williams, R. 2017 [1983]. Culture and Society, 1780–1950. London: Penguin.
