Mercedes Bunz
To cite this article: Mercedes Bunz (2019) The calculation of meaning: on the misunderstanding
of new artificial intelligence as culture, Culture, Theory and Critique, 60:3-4, 264-278, DOI:
10.1080/14735784.2019.1667255
ABSTRACT
A wide range of different AI systems based on the promising
technology of machine learning have been implemented in
everyday life without further ado, at times delivering or even
supplanting institutional decisions. Following Gilbert Simondon’s
analysis in On the Mode of Existence of Technical Objects, this essay
explores the contemporary technical objects of new Artificial
Intelligence systems to ask if their premature acceptance might
be based on a misunderstanding: AI systems are able to calculate
meaning, whereby they perform a task traditionally rooted
in the sphere of culture. Are AI-informed technical objects,
because of their new ability to calculate meaning, mistakenly
being read as ‘aesthetic objects’, thereby creating the illusion of
being ‘integrated’ into the world? And how could their integration
be understood differently? The article contributes to studies
located at the intersection of work on Simondon and digital
technology, thereby traversing Science and Technology Studies
and Philosophy of Technology.
Critical thought seems to be easily cast aside when it comes to new technology develop-
ments in Artificial Intelligence such as machine learning. This is important to consider,
as machine learning is a promising technology based on an alternative programming para-
digm: instead of writing an algorithm directly, the programmer sets up a framework that
runs a data analysis, with the algorithm as its outcome. In other words, the programming
is done by a statistical model that analyses a large set of data without explicit instructions,
finding patterns and inferences in that data and categorising
it (LeCun et al. 2015: 436). Ground-breaking transformations to the set-up of such
machine learning frameworks in 2014 and 2015 led to advances in a wide range of
fields, from speech recognition, visual object recognition and object detection to many other
domains such as drug discovery and genomics (LeCun et al. 2015). Soon after, the tech-
nology was applied to quite diverse ends: from predictive text assistance in
smartphones to the identification of objects and faces for photos, the interpretation of
video-material linked to self-driving cars, the analysis of medical images for diagnosing
illnesses, the evaluation of performances recorded as data, and many more – a list
which shows that the technology of machine learning is being used in multiple sectors
and for a broad range of activities, some sensitive and others more playful (such as the
machine learning programme AlphaGo, which won the Google DeepMind Challenge
match by 4:1 against 18-time world champion Lee Sedol [Silver et al. 2017]).
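The alternative programming paradigm described above – a framework that derives the ‘algorithm’ from data rather than from hand-written rules – can be illustrated for readers in a few lines of code. The library and toy dataset here (scikit-learn, its bundled handwritten-digits set) are illustrative assumptions, not the systems discussed in this article:

```python
# Sketch of the alternative programming paradigm: no hand-written
# classification rules; a statistical model derives the 'algorithm'
# (here, a digit classifier) from patterns found in labelled data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# The programmer only sets up the framework; the 'program' that maps
# pixels to digit labels is an outcome of fitting the model to data.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

No rule in this sketch states what a ‘4’ looks like; the mapping from pixels to labels is entirely an outcome of the data analysis.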
The wide variety of use cases for this technology is one among many points
that provide a rationale for approaching machine learning through the
framework of Gilbert Simondon’s technical thinking in general, and his book On the
Mode of Existence of Technical Objects in particular. One of Simondon’s concepts in
this book is to avoid thinking of a technical object as something fixed: ‘the individual tech-
nical object is not this or that thing’ (1958: 26). This point, which will be discussed in more
detail further below, clearly applies to the technical object that is machine learning; it
shows, for example, in the wide variety of its applications.
Simondon also refused to approach a technology through its use cases, remarking that ‘we
reduce technical objects when we think of them starting from the way they are being
used’ (1958: XI). As a technology that delivers different aspects of ‘intelligence’ in
the same way that electric energy delivers different forms of ‘power’, the new Artificial
Intelligence that is machine learning is difficult to describe through one single example.
Comparable to the application of electricity (see Simondon
1982: 5), the intelligence applied by new Artificial Intelligence is both promising and risky.
While fully acknowledging the promising potential of new Artificial Intelligence, the aim
of this essay is to zoom in on an interesting phenomenon that can be found on its risky
side: the tendency to apply new Artificial Intelligence earlier than other technologies to
the real world, at times missing the step by which new technologies usually get critically
assessed – that of lab testing.
This phenomenon, that new Artificial Intelligence has been employed promptly and
widely, has been noticed by several scholars, such as Adrian Mackenzie, who remarks in
his comprehensive study Machine Learners: ‘Known by various names – machine
learning, pattern recognition, knowledge discovery, data mining – the field and its
devices … seem to have quickly spread across scientific disciplines, business and com-
mercial settings, industry, engineering, media, entertainment and government’ (2017:
1). The quick spread of new Artificial Intelligence systems is also observed by the AI
Now Institute, an interdisciplinary research centre dedicated to understanding the
social implications of Artificial Intelligence: ‘From criminal justice to health care to edu-
cation and employment, we are seeing computational and predictive technologies
deployed into or supplanting private and governmental decision-making procedures
and processes’ (AI Now Institute 2018: 3). Such a deployment of new Artificial Intelli-
gence without further ado results in the technology not being tested thoroughly before-
hand, which leads to cases of injustice often related to machine bias: there has been
extended critique, for example, on automated procedures of criminal risk assessment, in
which new AI is used for calculating the future criminality of inmates (Angwin et al.
2016; Oswald et al. 2018); on public teacher employment evaluations causing good tea-
chers to lose their jobs (O’Neil 2017: 3–11); and on millions of everyday discriminations
through search results that favour white people (Noble 2018). Here, one can find the ten-
dency that new Artificial Intelligence systems are being put to the test through their
direct implementation into everyday life. Such forms of testing ‘are explicitly designed
to implement experiments in social settings’, as Marres and Stark (2018) remarked in
the introduction to their workshop ‘Put it to the test: critical evaluations of testing’.
Regarding new AI, these forms of testing have been called by the AI Now Institute a
‘rampant testing of AI systems “in the wild” on human populations’, which they explain
as follows: ‘Silicon Valley is known for its “move fast and break things” mentality,
whereby companies are pushed to experiment with new technologies quickly and
without much regard for the impact of failures, including who bears the risk’ (Whittaker
et al. 2018: 8).
It is this tendency of Western societies to accept such real-life testing of new Artificial
Intelligence that guides this essay, a tendency that becomes most explicitly apparent in the
real-life testing of self-driving cars, which have knowingly cost several lives without being
shut down: the first fatal victim of a self-driving car accident in 2016 was the 40-year-old
driver Joshua Brown, who died in an underride collision while his Tesla car was running
on ‘autopilot’ mode. Tesla allows its customers to switch on the autopilot programme,
which at the time of writing had been in beta mode for five years since its launch in
October 2014. While customers using the autopilot mode serve Tesla as untrained test
drivers of their own free will and with the knowledge of several fatalities caused by the
programme, the decision to take part in those real-life tests has not
always been voluntary. Among the fatal accidents linked to self-driving cars is that of the
49-year-old Elaine Herzberg, who in March 2018 became the first pedestrian to be run
over by an autonomously driving Uber while crossing a multilane street with her bike.
While the car that caused the accident by misidentifying Herzberg had even been
supervised by a safety driver, a few days before the accident the governor of Arizona had
announced that fully autonomous cars without anyone behind the wheel were now
allowed to operate on public roads (State of Arizona 2018). In reaction to those fatalities, both Tesla and
Uber started to regularly publish safety reports with Uber acknowledging shortcomings
that contributed to the fatal crash. Examples such as these show that there is indeed a
‘rampant testing of AI systems … on human populations’ (Whittaker et al. 2018: 8).
Thus, one can curiously ask: what drives this premature acceptance of new Artificial
Intelligence systems deployed without sufficient safety measures, i.e., without much critical
thought? Where is such an acceptance coming from, and how could one explain such a
phenomenon? Why are ethical considerations and critical thought so easily disregarded
when it comes to this type of technology?
To explore those questions, Gilbert Simondon is an excellent thinker to fall back upon
when trying to clarify how the relevance of critical thought came to be diminished in what
some deem the coming epoch of Artificial Intelligence. His thinking has by now
been applied many times to digital technologies that process and analyse data, as his
approach helps introduce a more complex understanding of digital operations
beyond their aspect of automation. In the following, this article will show this by reviewing
approaches turning to Gilbert Simondon in order to get a more nuanced understanding of
digital technologies. In a second step, the paper then turns to the actual technical process
of machine learning, roughly laying out how it calculates meaning. The fact that machine
learning can calculate meaning, a task so far mainly rooted in the sphere of culture, then
leads to a third step and the question of whether, and in what way, this new capability
actually repositions the mode of the technical object, or whether it merely imitates a
repositioning. For this, the paper turns to Simondon’s precise analysis of
the technical object and the aesthetic object, to show in a final step that there is a misun-
derstanding that might contribute to the premature integration of new Artificial Intelli-
gence systems shown above.
Against this fabrication, the point of opening up the black box of digital technology has
been taken up by various scholars.1 On a theoretical level, Yuk Hui’s (2016) comprehen-
sive analysis of ‘digital objects’ aims for a more complex understanding of digital systems
as places of relation against the misunderstanding of the digital as just immaterial or auto-
mated. Simon Mills (2015) points in a similar direction when using Simondon’s notion of
‘information’ to critique typical claims of Big Data as ‘reality mining’, which fail to account
for the ‘relations social systems have with each other and the environment’ (Mills 2015:
71). And recently Scott Wark (2019) conceptualised the ‘digital subject’ (following from
Goriunova 2019) with Simondon as a ‘technical entity’ that individuates by circulating
data, again foregrounding the relation we have with the digital.
Other scholars are linking theory to digital practice: Henning Schmidgen’s analysis of
‘Simondon’s Politics of Technology’ (2012) criticises the ‘black-boxed interfaces’ of our
digital devices. Schmidgen ends his essay with a call to open up that black box and to
embrace an ‘understanding and shaping of the material culture of contemporary societies’
(30). A few years later, Coté and Pybus (2016) put his claim into practice: to facilitate this
understanding, they initiated a mobile phone workshop (including teenage users as well as
hackers) as a practice-led opportunity to rethink the contested relationship between the
human and technology. Their hackathon workshop was also used to study the data collec-
tion of mobile Android devices, thereby creating a different understanding of, and relation-
ship between, mobile phone owners and their technology, while at the same time
exploring the workshop as a method.
Approaches like these aim to reposition digital technology as a more open
entity and to counter the misunderstanding of digital technology as mere automation; a misunder-
standing that continues with the technical object that is new Artificial Intelligence, as we
will see. This time, however, alongside the imagination of automation one also finds a pre-
mature implementation of it. With Simondon, this paper therefore asks: what aspect might
cause the prompt acceptance of new AI systems? To answer this, the paper will now turn
to the technology that mostly informs new Artificial Intelligence systems, i.e., machine
learning. If the mode of existence of the contemporary technical object has changed, it
will likely have changed because of this new technical development that calculates
meaning. So what is this new technology capable of? In what way is its mode of existence
different, or is its different treatment rather built on a misunderstanding?
1. For reasons of time and brevity, this overview concentrates on English-language publications, in the knowledge that
there are many further publications linking the digital and technology in French, German, Italian and other languages.
133). If digital technology has started to enter the sphere of meaning, could it be that its pos-
ition is being shifted towards the sphere of culture? And might this be the reason for the
prompt acceptance of Artificial Intelligence systems into our world? To clarify this, one
first needs to understand the positioning of culture and technology in Western society,
a positioning of central concern to Simondon’s philosophy of technology that prominently
frames his study On the Mode of Existence of Technical Objects.
fact that this essay is interested in the misunderstanding of new Artificial Intelligence by
exploring whether the calculation of meaning might have changed – or merely obscured – the
positioning of the technical object that is machine learning, the next section will focus
mainly on Simondon’s differentiation of the technical and the aesthetic object.
For Simondon, the aesthetic object is the opposite of being detached as it is integrated into
the world: ‘It is indeed integration that defines the aesthetic object, and not imitation’,
writes Simondon (195). By stressing the aspect of its integration, Simondon’s approach
to the aesthetic object goes even further. Swapping imitation for integration, he
not only refutes the classic aesthetic theory of mimesis but also relocates the
moment of beauty from the object to our encounter with it: ‘It is never the object strictly
speaking that is beautiful; it is the encounter’, he writes, repeating two pages later: ‘Real
aesthetic feeling cannot be enslaved to an object’ (202–204).
2. Bardin (2015: 167) reports that this section of the book has been the one ‘rarely taken into consideration by the critique’,
even though it is supposedly the one Simondon was most attached to, according to Simondon’s son (Bardin cites Hottois
1994: 118).
The integration of the artwork, however, is also not straightforward. For Simondon, an
aesthetic object is defined by two links: ‘The aesthetic work is … linked to the world and to
man’ and ‘also linked to other works’ ‘as a unique intermediate reality’ (200). These two
links or anchors – being linked to the world and having links to other works of art – define
its characteristics as it is ‘characterized by the possibility of passage from one work to
another according to an essential analogical relation’ (200).
Here, we encounter a very different mode of existence of being in the world than the
one that could be found with the technical object. There is no violence, but instead ‘an
essential analogical relation’ – ‘analogy’ being for Simondon: ‘the foundation of the possi-
bility of going from one term to another without a negation of the term by the succeeding
one’ (200–201, emphasis added). This capacity of an analogy without a negation enables
the work of art to establish links with elements of the world, whereby it integrates itself
into the world. Or in Simondon’s words: the ‘work of art re-establishes a reticular universe
at least for perception’ (192), and this ‘aesthetic universe is partial, integrated, and con-
tained in the real and actual universe’ (192, emphasis added). And with this point,
finally the particular force of the aesthetic object comes to the fore. Its specificity is to
be contained in the real and to remain integrated into the world, while at the same
time establishing, through this integration, a reticular universe from which the new
emerges: ‘Art is that through which a new reticulation emerges … and as a consequence
of this new reticulation there is the emergence of a real universe’ (204, emphasis added).
A point he had already made ten pages earlier, saying of ‘aesthetic reality’ that it ‘is
a new mediation between man and the world, an intermediate world between man and
the world’ (194, emphasis added).
But let us take a step back and compare both positionings, that of the aesthetic object
with that of the technical object. Both introduce an aspect of intermediacy; they are both
characterised as being intermediate. The specificity with which they produce this interme-
diacy, however, remains fundamentally different. The aesthetic object is linked to an emer-
gence of a new reticulation, an intermediate world. It creates a reality as a new mediation
between man and the world through ‘an intermediate world’. The technical object is
located in a fundamentally different position: it is not an intermediate world but an inter-
mediate tool, which ‘intervenes as a mediate between man and the world’. Unlike the
aesthetic object, it is not linked to the world but is necessarily detached. Unlike the
aesthetic object, it does not create but intervenes. Thus, one can see that, according to
Simondon’s definition, both objects are intermediates, while they show very different ten-
dencies as to how to implement their intermediate capacity.
Now that we have established the different positionings of the technical and aesthetic
object, we can return to our initial question: how is it that the force that characterises all tech-
nical objects – being detached and liberated from the world, a force which allows technology
to apply to it in a violent way and thereby to intervene between man and the world – often
seems disregarded when it comes to new Artificial Intelligence systems? Is the reason for the
‘rampant testing of AI systems “in the wild” on human populations’, as the AI Now Institute
called it, that the technical objects that use machine learning techniques and calculate
meaning are assumed to be of another character than other technical objects? To answer
this question, the next section will study if AI systems by calculating meaning are in fact
coming closer to the sphere in which we traditionally position meaning, i.e., closer to
culture. Could AI systems rightly be treated as aesthetic objects?
apparent when one looks at recent research that has experimented with cases in which
those systems and their identifications have been misguided (Geirhos et al. 2018).
When testing different contemporary system architectures such as AlexNet, VGG,
GoogLeNet and ResNet-50 on images from their ‘Stylised ImageNet’ dataset, computer scientists
from the University of Tübingen (Geirhos et al. 2018) realised that those models could be
easily confused. The reason: the current analysis of images by new AI systems is heavily
biased towards texture. When shown images with a texture–shape cue conflict – such as a cat
shape with an elephant skin texture instead of fur – the identification failed. This shows
how systems identify: they approach an image via its texture, which means they struggle
to identify more abstract, larger shapes. A cat with an elephant texture was an elephant for
the AI programme, while it still was a cat to humans (Geirhos et al. 2018: 2). This exper-
iment again confirms that the calculation of meaning we find with new AI remains fun-
damentally different from an understanding of meaning.
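The logic of this cue-conflict test can be sketched for readers in code. The measure below follows the idea of a ‘shape bias’ score – the fraction of shape-or-texture decisions that followed shape – while the toy class labels and predictions are illustrative assumptions standing in for a real network’s outputs, not the authors’ experimental code:

```python
# Sketch of a cue-conflict evaluation behind the texture-bias finding:
# each test image combines the shape of one class with the texture of
# another (e.g. a cat shape with elephant skin). A model's 'shape bias'
# is the fraction of shape-or-texture decisions decided by shape rather
# than texture. The toy predictions stand in for a network's outputs.
from dataclasses import dataclass

@dataclass
class CueConflictImage:
    shape_label: str    # e.g. 'cat' (what humans tend to see)
    texture_label: str  # e.g. 'elephant' (what texture-biased CNNs see)

def shape_bias(images, predictions):
    """Fraction of shape-or-texture decisions that followed shape."""
    shape_hits = texture_hits = 0
    for img, pred in zip(images, predictions):
        if pred == img.shape_label:
            shape_hits += 1
        elif pred == img.texture_label:
            texture_hits += 1  # predictions matching neither are ignored
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else 0.0

# A texture-biased model labels the cat-with-elephant-skin an elephant:
images = [CueConflictImage('cat', 'elephant'),
          CueConflictImage('car', 'clock'),
          CueConflictImage('bear', 'bottle')]
cnn_like = ['elephant', 'clock', 'bear']   # mostly follows texture
print(f"shape bias: {shape_bias(images, cnn_like):.2f}")  # low value
```

A human observer, by contrast, would score near 1.0 on the same images, which is exactly the gap between calculated and understood meaning the experiment makes visible.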
With Simondon one could say that those models are still ‘detached from the world’,
because the meaning they calculate remains linked to statistical models and not linked
to the meaning of the world. In other words, AI-informed technical objects can now cal-
culate meaning, and this technical evolution has opened a door that has so far been closed;
still, this does not mean that these objects are integrated in the same manner as an aes-
thetic object integrated into the world around it – they remain technical. What we are facing
is therefore a case of a mistaken identity: new AI systems are imitating the understanding
of meaning by calculating it, but they are not understanding – they lack the ability to link
their classifications in an integrated way to a wider, constantly shifting context. Their cal-
culations are not integrated into the world but remain detached.
Interestingly, this mistaken identity is nothing new for a technical
object from a Simondonian perspective – the phenomenon has already been described in On the
Mode of Existence of Technical Objects: while in principle, there can be a transition
between the technical and the aesthetic object – technical objects ‘have an aesthetic
value’ (196) when they are ‘integrated into the natural or human world’ (199) – in practice
the aesthetic object is often ‘enveloping and masking the technical object’. This mask does
not find Simondon’s approval: ‘Every disguise of a technical object generally produces the
uncomfortable impression of a fake and appears like a materialized lie’ (196). Exactly this
aspect is taken up again in ‘Psychosociologie de la technicité’ (1961: 37) when Simondon
describes ‘the obligation of the technical object to wear a veil or a disguise to penetrate the
cathedral of culture’. This is especially the case for what he calls crypto-technical
objects: objects that pretend to be something they are not, such as a fake fireplace that
only simulates a flickering fire. The other type of technical object Simondon describes
is the phanero-technical object, which does the opposite: aesthetically, it is organised
around a technical element, such as the tractor around its strong engine or the sports car
proudly exhibiting its exhaust (38–39). The case of new AI misunderstood as an aesthetic
object can be related to these observations, although new AI draws on other aspects of aes-
thetic objects.
While Simondon described the disguise of the technical object to enter the ‘cathedral of
culture’ (1961: 37) as a material one, i.e., as a technical object that bowed to aesthetic stan-
dards that had no technical function, new Artificial Intelligence is taking a different route.
What it delivers seems to imitate an understanding of our world, a task that so far has been
exclusively linked to aesthetic objects which produce meaning by being integrated into our
world. Imitating their understanding through calculation is the reason why the technical
object that is new Artificial Intelligence has been mistakenly regarded as also being inte-
grated into our world. In a similar way to the crypto-technical object, it simulates an aes-
thetic function it does not have: an understanding of the world that arises from being
integrated.
are linked to the identification of meaning, they could not arrive at that meaning in a more
different way. Still, the imitation of understanding by the technical object that is machine
learning seems to lead to the technical object being welcomed as if it were integrated into
our world: it is mistaken for a cultural, aesthetic object instead of a detached one that
turns to the world in an abstract and violent way. Taking on a Simondonian perspective
makes their confusion understandable, and with it the premature integration of new AI
systems.3 Turning to Simondon, the inquiry could show that the technical and the aes-
thetic object share the aspect of being intermediate. However, the aesthetic
object and its intermediacy operate by being deeply integrated into the world; the tech-
nical object, on the other hand, is detached from the world; it is an intermediate tool.4 The
intermediate tool of new AI has the capability of calculating meaning, but it remains a tool.
This tool allows an imitation of understanding, but not an actual ‘understanding’ – its
meaning remains calculated. To not acknowledge this means that Artificial Intelligence
systems are being misunderstood as ‘understanding’ and ‘integrated’, and it is this
which allows them to be implemented prematurely.
Understanding the calculation of meaning by new Artificial Intelligence as that of a technical
object would question its integration, thereby bringing to the fore its exceptional capability
and, with it, a much more interesting aspect: that its ‘intelligence’ does not function in the
same way as the one we know from living beings. What is its specific technical logic?
How does this other way of being intelligent work?
Instead of curiously asking such questions, the contemporary approach is to cover up
this ‘other’ intelligence in black-box systems, as the focus is on the capability of new Artifi-
cial Intelligence for ‘automation’. Here, a Simondonian reading of new Artificial Intelli-
gence systems in future research could help to shift this. Approaching new Artificial
Intelligence as a technical object allows a much more open approach – and for Simondon
the openness, the ‘margin of indeterminacy’ has been deeply linked to ‘a progressive per-
fecting of machines’ (1958: 17). Future work could use Simondon to develop a thinking
that aims at opening up the black box (Schmidgen 2012) of new Artificial Intelligence
to bring out existing and new relations (Mills 2015; Coté and Pybus 2016). Instead of a
black box, there is certainly room for more complex interfaces than the final decisions
most new AI interfaces currently have on offer. Interestingly, Simondon (1958: 135)
himself already touches on the topic of Big Data when discussing the collaboration
of technical and human memory regarding magnetic tape recordings that are ‘capable
of retaining monomorphic documents that are very complex, richly detailed, and
precise for a very long time’; remarks that focus on the very specific skill technical
digital memories have on offer. Understanding the specific skill of the statistical intelli-
gence that informs machine learning would then lead away from debates that fear the
3. One could consider whether such a premature integration is not generally an aspect typical of digital technology and therefore
an effect on a larger scale. In the text ‘Technical Mentality’ (1961), Simondon explores the genesis of technical culture by
describing different technical stages such as the artisanal modality, the industrial modality and the network modality,
each following a re-constellation of two ‘sources’: ‘energy’ and ‘information’ (5). Analysing these aspects could help to
understand further why and how ‘technical reality’ often becomes misunderstood as actual reality, which Brian
Massumi also notices when talking about a ‘shift toward a world integrally reshaped – culturally, socially, and economi-
cally – by digital technologies’ (Massumi 2012).
4. There are cases in which new Artificial Intelligence is used to create aesthetic objects, such as the neural network-
generated nude portraits that artist-programmer Robbie Barrat calculated using progressive growing of Generative
Adversarial Networks.
loss of a human skill through its automation and towards the collaboration between an
intelligent machine and a differently intelligent human. How can those two different
ways of being intelligent work best together? Of what new ways of being intelligent are
they together capable? Questions like these could help answer the call of some experts
to link human autonomy more strongly to artificial autonomy (Floridi and Cowls
2019). And last but not least, it would be a way of moving beyond the current
misunderstanding.
Acknowledgements
I owe many thanks for this essay, starting with Conor Heaney’s and Iain Mackenzie’s invitation to
the conference Culture & Technics: The Politics of Simondon’s Du Mode at the Centre for Critical
Thought, University of Kent. The piece has also been thoroughly informed by the work and con-
versation with Noortje Marres, and her inspiring workshop ‘Put it to the test’ (together with David
Stark in London, December 2018), a workshop that has been tremendously important for this text.
Many thanks to the reviewers for their profound but also constructive feedback that helped shape this
essay, and to the careful and thorough edits and suggestions by the journal’s editor Christopher
C. Barnes. And last but not least to my Macbook for bearing with me and all the PDFs it had to
open and process.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
Mercedes Bunz is Senior Lecturer in Digital Society at the Department of Digital Humanities, King’s
College London. Her most recent books are The Internet of Things (Polity 2018), written with Graham
Meikle, and the open access publication Communication (University of Minnesota Press/meson
press 2019), written with Finn Brunton and Paula Bialski, which discusses how machine
communication has changed the notion of communication. She is a member of the international and inter-
disciplinary Research Network for the Critical Humanities, Terra Critica.
ORCID
Mercedes Bunz http://orcid.org/0000-0003-2876-0522
References
AI Now Institute. 2018. ‘Litigating Algorithms: Challenging Government Use of Algorithmic
Decision Systems’ [report]. New York: AI Now Institute. Available online at https://
ainowinstitute.org/litigatingalgorithms.pdf (accessed 6 September 2019).
Angwin, J., Larson, J., Mattu, S. and Kirchner, L. 2016. ‘Machine Bias’. ProPublica, 23 May. Available
online at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-
sentencing (accessed 6 September 2019).
Bardin, A. 2015. Epistemology and Political Philosophy in Gilbert Simondon: Individuation,
Technics, Social Systems. New York: Springer.
Barthélémy, J.-H. 2011. ‘Quel mode d’unité pour l’œuvre de Simondon?’. Cahiers Simondon 3, 131–
148.
Bunz, M. 2014. The Silent Revolution: How Digitalization Transforms Knowledge, Work,
Journalism, and Politics Without Making Too Much Noise. Basingstoke: Palgrave.
Bunz, M. and Meikle, G. 2018. The Internet of Things. Cambridge: Polity.
Combes, M. 2013. Gilbert Simondon and the Philosophy of the Transindividual. Cambridge, MA:
MIT Press.
Coté, M. and Pybus, J. 2016. ‘Simondon on Datafication. A Techno-Cultural Method.’ Digital
Culture & Society 2:2, 75–92.
Chun, W. 2018. ‘Queerying Homophily’. In C. Apprich, F. Cramer, W. Chun and H. Steyerl (eds),
Pattern Discrimination. Meson press and University of Minnesota Press, 59–97. Available online
at https://meson.press/wp-content/uploads/2018/11/9783957961457-Pattern-Discrimination.
pdf (accessed 6 September 2019).
Floridi, L. and Cowls, J. 2019. ‘A Unified Framework of Five Principles for AI in Society’. Harvard
Data Science Review. Available online at https://doi.org/10.1162/99608f92.8cd550d1 (accessed 6
September 2019).
Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A. and Brendel, W. 2018. ‘ImageNet-Trained CNNs are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness’. Available online at https://arxiv.org/abs/1811.12231 (accessed 6 September 2019).
Géron, A. 2017. Hands-on Machine Learning with Scikit-Learn and TensorFlow. Sebastopol, CA:
O’Reilly Media.
Goodfellow, I., Bengio, Y. and Courville, A. 2017. Deep Learning. Cambridge, MA: MIT Press.
Goriunova, O. 2019. ‘The Digital Subject: People as Data as Persons’. Theory, Culture & Society.
Available online at https://doi.org/10.1177/0263276419840409 (accessed 6 September 2019).
Hall, S. 1980. ‘Encoding/Decoding’. In S. Hall, D. Hobson, A. Lowe and P. Willis (eds), Culture,
Media, Language: Working Papers in Cultural Studies, 1972–1979. London: Hutchinson, 128–138.
Hottois, G. 1994. ‘Gilbert Simondon entre les Interfaces Technique et symbolique’. In F. Tinland
(ed), Ordre Biologique, Ordre Technologique. Seyssel: Champ Vallon, 72–95.
Hui, Y. 2016. On the Existence of Digital Objects. Minneapolis: University of Minnesota Press.
Karpathy, A. 2014. ‘What I Learned From Competing Against a ConvNet on ImageNet’. Andrej
Karpathy blog. Available online at http://karpathy.github.io/2014/09/02/what-i-learned-from-
competing-against-a-convnet-on-imagenet/ (accessed 6 September 2019).
LeCun, Y., Bengio, Y. and Hinton, G. 2015. ‘Deep Learning’. Nature 521:7553, 436–444. Available
online at https://www.nature.com/articles/nature14539 (accessed 6 September 2019).
Mackenzie, A. 2017. Machine Learners: Archaeology of a Critical Data Practice. Cambridge, MA: MIT Press.
Marres, N. and Stark, D. 2018. ‘Put It To the Test: Critical Evaluations of Testing’ [written work-
shop introduction].
Massumi, B. 2012. ‘‘Technical Mentality’ Revisited: Brian Massumi on Gilbert Simondon’. In A. De
Boever, A. Murray, J. Roffe and A. Woodward (eds), Gilbert Simondon: Being and Technology.
Edinburgh: Edinburgh University Press, 19–36.
McArthur, T. and Lam-McArthur, J. 2018. ‘Meaning’. In The Oxford Companion to the English
Language. Oxford: OUP. Available online at http://www.oxfordreference.com/view/10.1093/
acref/9780199661282.001.0001/acref-9780199661282-e-770 (accessed 6 September 2019).
Mills, S. 2015. ‘Simondon and Big Data’. Platform: Journal of Media and Communication 6, 59–72.
Noble, S. U. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU
Press.
O’Neil, C. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens
Democracy. New York: Broadway Books.
Oswald, M., Grace, J., Urwin, S. and Barnes, G. C. 2018. ‘Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and “Experimental” Proportionality’. Information & Communications Technology Law 27:2, 223–250.
Schmidgen, H. 2012. ‘Inside the Black Box: Simondon’s Politics of Technology’. SubStance 41:3, 16–31.
Shanmugamani, R. 2018. Deep Learning for Computer Vision. Birmingham: O’Reilly and Packt
Publishing.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L.,
Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T. and
Hassabis, D. 2017. ‘Mastering the Game of Go Without Human Knowledge’. Nature 550:7676,
354–359.
Simondon, G. 2017 [1958]. On the Mode of Existence of Technical Objects. Translated by Cecile
Malaspina and John Rogove. Minneapolis: Univocal.
Simondon, G. 2014 [1961]. ‘Psychosociologie de la technicité’. In Sur La Technique (1953–1983).
Paris: Presses Universitaires de France, 27–30.
Simondon, G. 2015 [1965]. ‘Culture and Technics’. Radical Philosophy. Available online at https://
www.radicalphilosophy.com/article/culture-and-technics-1965 (accessed 6 September 2019).
Simondon, G. 2012 [1982]. ‘On Techno Aesthetics’. Translated by A. De Boever. Parrhesia 14, 1–8.
Simondon, G. 2012. ‘Technical Mentality’. Translated by A. De Boever. In A. De Boever, A. Murray, J. Roffe and A. Woodward (eds), Gilbert Simondon: Being and Technology. Edinburgh: Edinburgh University Press, 1–15.
State of Arizona, Executive Order. 2018. ‘Advancing Autonomous Vehicle Testing and Operating;
Prioritizing Public Safety’. Available online at https://azgovernor.gov/sites/default/files/related-
docs/eo2018-04_1.pdf (accessed 6 September 2019).
Wark, S. 2019. ‘The Subject of Circulation: On the Digital Subject’s Technical Individuations’. Subjectivity 12:1, 65–81.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., Myers West, S.,
Richardson, R., Schultz, J. and Schwartz, O. 2018. AI Now Report. New York: AI Now
Institute/New York University.
Williams, R. 2017 [1983]. Culture and Society, 1780–1950. London: Penguin.