
Biopolitical Simulations: Governing Life in FuturICT

Submitted by Helge Peters (33206931) in partial fulfilment of the requirements for the degree of MA Media and Communications in the Programme in Contemporary Cultural Processes, Goldsmiths College, University of London, August 2012


 
 
 
 
 
 
 
 
 
 
 
 
 Helge Peters (student no. 33206931) Course code: MC71044A Course title: MA Media and Communications (theory) dissertation Supervisor: Dr Sarah Kember Date: 31 August 2012 14991 words

Abstract
FuturICT is a current proposal for an EU‐funded big science project which seeks to leverage data from the internet in order to create predictive simulations of global social and economic systems. Utilising theoretical frameworks and simulation techniques informed by complexity science, the project promises to develop new forms of governance in a complex world. Drawing on Michel Foucault’s analysis of biopower, this dissertation examines how and with which consequences life figures as an object of knowledge and a governable domain in FuturICT.

Whereas the knowledge of life which both Michel Foucault and Georges Canguilhem describe for the 19th century stressed themes of organic homeostasis, contemporary complexity theories and the discourse of artificial life describe life as a process of emergence and self‐organisation abstracted from its materiality. It will be argued that the simulations of emergent social structures, and the techniques for guiding self‐organisation which FuturICT derives from biomimetic research, mark the entry of a specific contemporary knowledge of life into the government of living beings.

Drawing on feminist and other critical analyses of technoscience, the knowledge‐making practice of FuturICT is examined as resting on a problematic concept of objectivity and enacting a naturalisation of social relations. Insisting instead on life’s specificity as an object of knowledge might yield ways for resisting the instrumentalisation of life for projects of domination.




Table of Contents
1. Introduction
2. Literature review: the social and the vital
2.1. Biopower at the threshold of modernity
2.2. Knowing life
2.3. Regulating life
2.4. Cyborg life
2.5. Vital complexity
3. Methodology
4. Case study: governing life in FuturICT
4.1. FuturICT, big data and the technological zone
4.2. “Simulating life on Earth and everything it relates to“
4.3. Simulation, situatedness, transformation
4.4. Imag(in)ing global life
4.5. Biomimetic government
5. Conclusion: biopolitical simulations
Bibliography



Table of Figures
Fig. 1 Video still of revolving globe enveloped by data
Fig. 2 The whole earth as interface



1. Introduction
One of the most peculiar characteristics of contemporary society might be the convergence of media, technology, and life. As Sarah Kember suggests, considering the "technologisation of biology from mechanical metaphors of the heart as pump to the genetic informationalisation of life itself" and the "biologisation of technology from cybernetics to artificial intelligence to artificial life" (Kember, 2006: 235), the processes of exchange between media, technology and the life sciences seem to run both ways. At the same time, the increasing involvement of networked digital media in social processes yields new forms of social measure. In a sense, digital media have transformed the act of taking the vital signs of an aggregate of living beings: if the addressee of the question, “How are we feeling today?“ is a whole population, the answer might be found by mining Twitter data using language‐processing algorithms that measure the affective connotation of status messages (Mislove et al., 2010).
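The kind of affect measurement gestured at above can be caricatured in a few lines. The following is a deliberately crude, lexicon-based sketch; the word lists and example messages are invented for illustration and do not reproduce the language-processing techniques of Mislove et al. (2010):

```python
# Crude lexicon-based affect scoring of status messages (illustrative only;
# word lists and example messages are invented, not from Mislove et al.).

POSITIVE = {"happy", "great", "love", "good"}
NEGATIVE = {"sad", "awful", "hate", "bad"}

def affect_score(message):
    """Count positive minus negative words; > 0 reads as positive affect."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# "Taking the vital signs" of a tiny, invented population of status messages:
messages = ["Feeling great today", "This weather is awful", "Love this city"]
population_mood = sum(affect_score(m) for m in messages) / len(messages)
print(round(population_mood, 2))
```

Even this toy version makes the biopolitical gesture visible: individual utterances are dissolved into an aggregate measure of a population's mood.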
Drawing on Michel Foucault’s concept of biopower as a modality of power predicated on life, I will ask in this dissertation if there exists a specific biopolitics pertaining to this scenario of biologised media, mediated biology, and digitised population. I will do so by investigating FuturICT, a recent proposal for an EU‐funded big science project which promises to leverage data from the Internet in order to “simulate life on Earth and everything it relates to“ (Helbing/Balietti, 2011a: 76). Thus, the question I will address is how, and with which consequences, life becomes an object of knowledge and a governable domain within FuturICT. This involves questions regarding the way life is rendered computable and to what extent it is thus instrumentalised; how boundaries between nature and society are redrawn within the project proposal; to what extent the naturalisation of an object of knowledge resonates with a progress narrative promising mastery over nature through objective knowledge; and what political consequences might follow from this effort to simulate life on Earth.

Life, however, is not an unproblematic concept; neither is biopower. In the following chapter, I will try to arrive at an understanding of biopower which registers that this modality of power pertains not only to the government of living beings, but also incorporates a specific knowledge of biological life into a general economy of power. In this sense, biopower denotes a certain becoming‐vital of the social. The underlying concept of life informing the notion of biopower will be traced through readings of both Georges Canguilhem and Michel Foucault. However, if life is a historically contingent product of biological discourse, to what extent need we reconsider our understanding of biopower in light of recent transformations of the discourse on life? I will attempt a provisional answer by reconstructing the influence of cybernetics and complexity theories on the reconceptualisation of vital, social, and technological systems as essentially analogous complex, if not living, entities.

Investigating the mutual support of systems of knowledge and technologies of power, my methodology (chapter three) is informed by readings of Foucault’s notion of critique as well as Donna Haraway’s understanding of the technosciences as a cultural practice which focuses attention on the power dimension that is at stake in the construction of boundaries between nature and culture.





In chapter four, I will examine the ways in which life is at stake in the domain of knowledge and power that FuturICT circumscribes. Life figures threefold in FuturICT: as the domain to be simulated and governed, as informing the specific methodology of simulation itself, and as a resource for biomimetic technologies for governing naturalised social systems. Situating FuturICT in an ongoing debate about the emergence of big data technologies, I will proceed to examine how the simulation of emergent social structures proposed in FuturICT intersects with artificial life research, and discuss shared strategies for naturalising an object of knowledge and thus authorising the production of matters of fact. Drawing on Donna Haraway’s call for situated knowledges, I will then examine how these strategies of objective knowledge‐making resonate with a fantasy of omniscience permeating the technosciences, and register critical voices within complexity discourse which might problematise the underlying epistemology of FuturICT. The initiators of the FuturICT proposal underscore the project’s vital necessity by imagining a world out of control, which needs to be mastered by science and technology. Analysing the visual and textual narrative of FuturICT, I will argue that a progress narrative tapping into a gendered imaginary of nature is strategically deployed in order to secure the acceptance of technical solutions for social and political problems. These solutions register social systems as complex, if not living entities which are to be governed by applying mechanisms authorised by biological discourses. However, I will argue that naturalising the social as a complex, living system narrows the space for political contestation and might serve to privilege the given order over projects of transformation.





Concluding in chapter five with a discussion of the continuities and differences of knowledges of life and the biopolitical techniques underlying the specific articulation of biopower in FuturICT, I will draw attention to the ways in which life’s specificity as an object of knowledge might help to resist totalising epistemologies and enable ethicopolitical projects of transformation, instead of instrumentalising knowledge of life for the government of living beings.



2. Literature review: the social and the vital
2.1 Biopower at the threshold of modernity
Foucault explicitly mentions the concept of biopower in the concluding lecture of his lecture series Society Must Be Defended at the Collège de France from 17 March 1976 (Foucault, 2004) and in the last chapter of The History of Sexuality Vol. I from 1976 (Foucault, 2008). Later on, Foucault would develop the concept of biopower further in his lectures on Security, Territory, Population and The Birth of Biopolitics from 1977 to 1978 (Foucault, 2007; 2008a). All of these analyses bring one specific period of time to the fore; or rather, the transition between two periods. Both the formation of the sexuality dispositif and the rationalities and discourse of modern governmentality happened at the threshold between what Foucault termed the classical age and the modern age in Europe, between the 18th and the 19th century. This period witnessed decisive changes in the epistemic order, leading to the formation of new knowledges and methods of enquiry, a theme Foucault explored in his earlier work The Order of Things (Foucault, 2002). What is more, between the 18th and 19th century a different configuration of rationalities, techniques and institutions of power began to emerge in conjunction with new ways of knowing, a modality of power which Foucault would call biopower.

Foucault distinguishes between three modalities of power which are each characterised by specific rationalities, techniques, and objects. However, this is not to say that these modalities constitute different historical periods or displace each other. Rather, they characterise a prevailing rationality which organises the relation between different techniques and referents of power (Foucault, 2007: 8). The sovereign power of the pre‐classical age is characterised as a “right to seizure“ (Foucault, 2008: 136) centred around the figure of the sovereign ruler. However, at the threshold between the classical age and the modern age, the formation of a new regime of power takes place that becomes productive rather than subtractive, a power

to incite, reinforce, control, monitor, optimise, and organise the forces under it: a power bent on generating forces, making them grow and ordering them, rather than one dedicated to impeding them, making them submit, or destroying them (Foucault, 2008: 136).

Consequently, the 19th century sees the development of a moral discourse on poverty and on health, and at the same time the development of disciplinary institutions such as the school and the prison. Within these institutions and their discourses, an “anatomo‐politics“ (Foucault, 2008: 139) emerges that is directed at the individual human body, disciplining its unruly forces and thereby producing the docile body, trained to be productively integrated into the machine‐like institutions of factory and army (Foucault, 1991). However, from the 19th century onwards, disciplinary power is gradually supplemented by a new modality of power that operates according to a different rationality, namely biopower. The prescription of a rigid normative ideal coded within the forbidden and the obligatory typical of disciplinary power is displaced by an orientation towards describing and influencing empirically occurring regularities:

Discipline works in an empty, artificial space that is to be completely constructed. Security1 will rely on a number of material givens ... It is simply a matter of maximising the positive elements, for which one provides the best possible circulation, and of minimising what is risky and inconvenient (Foucault, 2007: 19)

1 Security, here, stands metonymically for biopower because the security apparatus, as will be explained later, is one of its defining mechanisms.

Moreover, contrary to disciplinary power, which is directed at the individual body, biopower is concerned with governing phenomena observable on an aggregate level. Ian Hacking notes how the early 19th century witnessed an “avalanche of printed numbers“ (Hacking, 1982: 281), that is, a veritable inflation of practices of measuring, counting and numerically recording social phenomena. Hence, new forms of social measure gave rise to the norm as an empirically given distribution of events which allows for some deviation within parameters describable in terms of probability. No longer was the causality of events such as scarcity, crime, and sickness explained cosmologically as determined by fortune, God’s will, or Man’s evil nature, but rather inserted into a space of probability governed by quasi‐natural laws (Foucault, 2007: 47). Henceforth, biopower strives to govern “aleatory events that occur within a population that exists over a period of time“ (Foucault, 2004: 246). The then‐new notion of the population came to guide governmental intervention, which employed techniques of statistical measurement to gain knowledge of “births and mortality, the level of health, life expectancy and longevity, with all the conditions that can cause these to vary“ on the scale of the population (Foucault, 2008: 139). This orientation towards a regulation of “biological or biosociological processes characteristic of human masses“ (Foucault, 2004: 250) entailed a specific knowledge of life, from which the notion of the population and its characteristic dynamics could emerge. It is precisely at this point of the “entry of life into history, that is, the entry of phenomena peculiar to the life of the human species into the order of knowledge and power“ (Foucault, 2008: 142) that Foucault situates the “threshold of modernity“ (Foucault, 2008: 143) and the beginning of biopower.
 


2.2 Knowing life
What is at stake in biopower is not exclusively bodily life but a notion of life which is inextricably entangled with a scientific understanding of life that emerged with the modern discipline of biology.2 In The Order of Things, Foucault (2002) examines a rupture in the epistemic order between the classical and the modern age. Contrary to a smooth progression of knowledge, Foucault observes a rather sudden change in the conditions of knowing between the 18th and 19th century which led to the development of the human sciences and the human subject as the centre of knowledge.

2 The concept of biopower has led to a variety of productive interpretations. For instance, Antonio Negri and Michael Hardt (2000) interpret biopower in light of an autonomist Marxist theorisation of productive capacities disseminated within a population and actualised as immaterial labour in post‐Fordist capitalist relations of production. Giorgio Agamben (1998) argues that the exclusion of bare life, which can be killed, from the qualified life of the citizenry is the founding gesture of sovereign power. Agamben’s concept of bare life has been criticised as transhistorical, neglecting the historical contingency of knowledges of life which Foucault emphasised (Muhle, 2007: 41‐42). For an overview and discussion of these and other approaches to biopower and biopolitics see Lemke (2011). However, the focus of this thesis is on life as a historically contingent referent of knowledge and power.

Specifically, Foucault observes parallel developments in linguistics, economics, and biology which obtained new objects of enquiry – language, labour, life – and were formed as scientific disciplines in the modern sense. The latter discipline is of immediate interest here. In the classical age, Foucault argues, biology as the science of life and living beings did not exist, nor did the notion of biological life:

All that existed was living beings, which were viewed through a grid of knowledge constituted by natural history (Foucault, 2002: 139).

The natural historians of the classical age were preoccupied with the classification and taxonomy of living beings, mainly in the field of botany. Natural history organised a field of visibility and sought to close the gap between what is systematically observable and describable by an orientation towards the visible structure of natural beings (Foucault, 2002: 144). By means of description of a set of visible differences, natural history sought to inscribe the multiplicity of living beings into a universal grid organised by differences in structure (Foucault, 2002: 149).

However, subsequently an orientation towards the organism and its functions in relation to the milieu enabled the notion of life to emerge. Organic structure was no longer regarded in terms of frequency and similitude of appearance as a means of organising a grid of differences, but came to be explained according to its function to perpetuate the organism’s existence:

The analysis of organisms, and the possibility of resemblances and distinctions between them, presupposes, therefore, a table, not of the elements, which may vary from species to species, but of the functions, which, in living beings in general, govern, complement, and order one another: not a polygon of possible modifications, but a hierarchical pyramid of importance (Foucault, 2002: 290)

Classification, then, no longer concerned visible differences within an order of representation, but functional similarities of vital organs which were hidden in the depths of the organism and would become observable only through methods of comparative anatomy and synthetic reasoning. Hence, biological enquiry came to be centred on the similar ways organic entities organise their purpose ‐ that is, how they live ‐ through the interdependent workings of their functional structures. It is through this decisive change in the way of systematically knowing living beings – from ordered representation to functional explanation – that life itself as an object of knowledge would emerge. Life became that which the living being was structured for, and at the same time that which transcended the individual living being, which preceded the organism and organised its internal structure as well as its relation to other organisms and the environment:

Living beings, because they are alive, can no longer form a tissue of progressive and graduated differences; they must group themselves around nuclei of coherence which are totally distinct from one another, and which are like so many different plans for the maintenance of life ... life withdraws into the enigma of a force inaccessible in its essence, apprehendable only in the efforts it makes here and there to manifest and maintain itself (Foucault, 2002: 297).

In Knowledge of Life, Georges Canguilhem (2008) develops an understanding of this elusive characteristic of the notion of life via an examination of the themes of vitalism and mechanism.3 Attempts to reduce the organism to a mechanism cannot account for the cause of the organism’s dynamic without resorting to either divine will, or a man‐made machine as a model of comparison. The latter presupposes the erasure of the living source of energy that animates the machine in order to function as an analogy (Canguilhem, 2008: 87). However, within the organism “one observes phenomena of self‐construction, self‐conservation, self‐regulation, and self‐repair” (Canguilhem, 2008: 88). Moreover, the mechanist analogy neglects the fact that organic life behaves differently from mechanisms that are rationally planned and constructed. Mechanist analogies assign organs a purposefulness that is troubled by empirical observation, because the organism “has less purpose and more potentialities“ (Canguilhem, 2008: 90). Contrary to the machine’s pre‐planned functionality, life is “experience, that is to say, improvisation, the utilisation of occurrences; it is an attempt in all directions“ (Canguilhem, 2008: 90). Far from being static, organisms constantly produce variations, irregularities, differences: in short, anomalies. However, their anomaly can only be determined in relation to the success of the living being in perpetuating its existence under changing conditions. Canguilhem argues that there is no stable normalcy proper to life, but that it should instead be approached as an

organisation of forces and a hierarchy of functions whose stability is necessarily precarious, for it is the solution to a problem of equilibrium, compensation, and compromise between different and competing powers ... irregularity and anomaly are conceived not as accidents affecting an individual but as its very existence (Canguilhem, 2008: 125).

3 Vitalism is often associated with philosopher Henri Bergson (1998) and his postulation of a vital force. While there exists a certain similarity between Bergson's vital force and Canguilhem's notion of vital normativity, the focus here will be on the latter, since the vital force does not produce norms, and it will be argued that the notion of vital normativity influenced Foucault's understanding of biopolitical normalisation (Muhle, 2007: 106).

In the context of life's perpetual variability, Canguilhem (1994: 335) maintains that the differentiation between the normal and the pathological state of the organism cannot be reduced to the description of quantitative differences within the functioning of organic processes, because the pathological state is both continuous with the normal state, in that the normal function of organic processes is exaggerated or diminished, yet also discontinuous on the level of the organic totality in its relation with the environment. Essentially, the normal in the context of the organism cannot be an external positing in terms of statistical measurement but should be understood as a kind of value‐laden judgement made by the living organism itself, which is not indifferent to its conditions of existence (Canguilhem, 1994: 339). On the aggregate level of the species, life constantly produces variations, and which of these variations turns out to be normal or pathological is a question not of the statistical distribution of organic features but of life's ability to perpetuate itself through variation within changing environments. Normal life, in this sense, is normative life, a life able to dynamically set its own norms:

There is no fact that is normal or pathological in itself. An anomaly or mutation is not in itself pathological. These two express other possible norms of life. If these norms are inferior to specific earlier norms in terms of stability, fecundity, or variability of life, they will be called pathological. If these norms in the same environment should turn out to be equivalent, or in another environment, superior, they will be called normal. Their normality will come to them from their normativity (Canguilhem, 1994: 354).

In this context, Canguilhem notes a certain functional mimesis of vital and social normalisation. Both life and the social are objects of normalising processes; however, the crucial difference lies in the respective immanence or externality of vital or social norms. Whereas the norms governing the social “must be represented, learned, remembered, applied“, within the organism “the rules for adjusting the parts among themselves are immanent, presented without being represented, acting with neither deliberation nor calculation” (Canguilhem, 1994: 376).
 



2.3

Regulating life

Maria
Muhle
(2007)
argues
that
Canguilhem’s
understanding
of
life’s
vital
 normativity
informs
Foucault’s
notion
of
biopower.
The
relation
between
vital
 normativity
and
biopolitical
regulation
is
one
of
functional
mimesis;
whereas
 vital
normativity
is
guided
by
an
intrinsic
value
of
life
‐
its
own
perpetuation
and
 maximisation
‐
the
biopolitical
norm
is
secured
by
external
apparatuses,
and
the
 value
which
directs
their
application
‐
fostering
the
productive
potential
of
the
 population
‐
is
socially
constructed.
Thus,
in
the
concept
of
biopolitics
the
 internal
dynamic
of
vital
normativity
is
transposed
into
the
externality
of
 biopolitical
normalisation
(Muhle,
2007:
232).
Contrary
to
discipline,
biopolitical
 normalisation
does
not
posit
an
ideal
norm
which
is
subsequently
trained
into


17


the
individual
subject4
but
rather
registers
different
distributions
of
phenomena
 within
different
segments
of
the
population
through
the
application
of
statistics.
 These
empirically
occurring
differential
norms
are
then
related
to
each
other
 and,
subsequently,
biopolitical
regulation
seeks
to
“bring
the
most
unfavourable
 in
line
with
the
most
favourable“
(Foucault,
2007:
63).
Hence,
the
biopolitical
 “norm
is
an
interplay
of
differential
normalities“
(Foucault,
2007:
63)
within
a
 multiplicity
of
individuals
understood
as
a
living
population
that
brings
forth
its
 own
norms.



 In
this
sense,
the
social
is
not
simply
understood
as
analogous
to
an
organism.
 Rather
than
that,
a
specific
understanding
of
the
dynamic
development
of
life
is
 incorporated
into
the
rationality
and
technologies
of
power
which
now
act
upon
 aggregate
phenomena
by
influencing
the
milieu
within
which
the
population
 develops
these
phenomena.
These
technologies
allow
for
deviation
to
a
certain
 extent,
and
are
future‐oriented
in
the
sense
of
securing
the
homeostasis
of
the
 social.
In
short,
the
bios
in
biopower
can
be
read
as
signifying
that
not
only
does
 life
become
the
referent
of
power
but
also
that
“the
social
derives
its
functional
 model
from
the
vital“
(Muhle,
2007:
233;
author’s
translation).
 Consequently,
the
security
apparatus
as
the
paradigmatic
technique
of
biopower
 registers
the
population
as
a
living
entity
subject
to
quasi‐natural
regularities
 and
processes,
and
seeks
to
regulate
these
processes
by
intervening
on
the
 conditions
within
which
they
occur.
The
population,
then,
no
longer
appears
as
a
 multiplicity
of
subjects
but
“as
a
set
of
processes
to
be
managed
at
the
level
and
 























































 4
In
fact,
this
static
subsumption
under
an
external
norm
would
be
pathological
in

Canguilhem’s
sense
of
the
term.


18


on
the
basis
of
what
is
natural
in
these
processes“
(Foucault,
2007:
70).
Foucault
 discusses
the
significance
of
the
emergence
of
the
population
in
the
context
of
 the
notion
of
milieu:

The
milieu
is
a
set
of
natural
givens
...
and
a
set
of
artificial
givens
...
a
certain
 number
of
combined,
overall
effects
bearing
on
all
who
live
in
it.
It
is
an
element
 in
which
a
circular
link
is
produced
between
effects
and
causes,
since
an
effect
 from
one
point
of
view
will
be
a
cause
from
another
(Foucault,
2007:
21)


Foucault
situates
the
emergence
of
the
notion
of
milieu
both
in
Newtonian
 physics
and
Lamarckian
biology
(Foucault,
2007:
20).
As
Canguilhem
(2008)
 elaborates,
the
milieu
is
a
concept
with
which
the
living
organism’s
specific
 situatedness
in
the
environment
came
to
be
explained,
and
which
underwent
 various
semantic
transformations.
From
an
ethereal
fluid
assumed
to
explain
the
 problem
of
action
from
a
distance
in
Newtonian
physics,
the
milieu
later
came
to
 be
known
in
biology
as
a
concept
to
account
for
the
causal
relation
between
 organic
change
and
physical
environment.
For
instance,
in
Lamarck
it
is
the
 organism’s
need
which
mediates
the
environment’s
influence
on
the
organism’s
 evolution.
In
Darwin,
there
exists
a
primary
milieu
which
is
that
of
the
“relation
 of
one
living
being
to
others“
(Canguilhem,
2008:
105)
and
which
precedes
the
 geographical
milieu
in
its
function
of
selection.
Throughout
its
various
 permutations,
the
milieu
figured
in
early
biological
discourse
as
the
sum
of
 environmental
circumstances
which
exert
an
influence
upon
the
development
of
 life.
For
Foucault,
then,
through
acting
upon
the
milieu
within
which
“artifice
 functions
as
a
nature
in
relation
to
a
population
that,
while
being
woven
from
 social
and
political
relations,
also
functions
as
a
species“
(Foucault,
2007:
22)
the
 population
as
an
entity
subject
to
the
characteristics
of
life
is
both
produced
and
 
 19


governed
by
the
security
apparatus.
In
a
similar
sense
in
which
Darwinist
 biology
understands
the
population
as
“the
element
through
which
the
milieu
 produces
its
effects
on
the
organism“
(Foucault,
2007:
78)
the
biopolitical
 intervention
on
the
level
of
the
milieu
signifies
the
entry
of
species‐life
into
the
 techniques
of
government:


The
milieu
appears
as
a
field
of
intervention
in
which,
instead
of
affecting
 individuals
...
one
tries
to
affect,
precisely,
a
population.
I
mean
a
multiplicity
of
 individuals
who
are
and
fundamentally
and
essentially
only
exist
biologically
 bound
to
the
materiality
within
which
they
live
(Foucault,
2007:
21)


Foucault
argues
that
this
intervention
of
the
security
apparatus
consists
of
 organising
systems
of
circulations
within
the
population.
In
the
examples
of
 town,
scarcity,
and
epidemics
that
Foucault
discusses,
security
is
a
way
of


allowing
circulations
to
take
place,
of
controlling
them,
sifting
the
good
and
the
 bad,
ensuring
that
things
are
always
in
movement,
constantly
moving
around,
 continuously
going
from
one
point
to
another,
but
in
such
a
way
that
the
 inherent
dangers
of
this
circulation
are
cancelled
out
(Foucault.
2007:
65).


The
security
apparatus’
work
of
enabling
positive
circulations
ties
into
an
 emerging
knowledge
of
control
circuits.
Foucault
discusses
the
emerging
 politico‐economic
rationality
of
the
Physiocrats
as
paradigmatic
for
the
 functioning
of
the
security
apparatus.
Confronted
with
recurring
catastrophic
 food
scarcities,
their
policies
of
laissez­faire
consisted
in
removing
price
controls,
 prohibitions
of
hoarding,
and
other
barriers
to
free
trade
and
therefore
 establishing
what
could
be
described
as
negative
feedback
mechanisms
ante
 litteram
which
were
meant
to
stabilise
the
market
in
a
state
of
homeostasis


20


(Foucault,
2007:
37).
In
this
sense,
both
in
spatial
planning,
economics,
and
 medicine,
the
security
apparatus
works
to

establish
an
equilibrium,
maintain
an
average,
establish
a
sort
of
homeostasis,
 and
compensate
for
variations
within
this
general
population
and
its
aleatory
 field
(Foucault,
2004:
241)


It is no coincidence that Foucault frames the functionality of the security apparatus in terms alluding to the terminology of cybernetics. As Joseph Vogl (2004) notes, the notions of self-regulation and circular causality emerged in their early forms within different fields of knowledge at precisely the same time for which Foucault describes the emergence of biopower. For instance, in Malthus' writings on the population, the causal relation between population size, wages and food production leads to self-regulating cycles of population growth, food prices and wage levels. Similarly, the centrifugal governor invented by James Watt in 1788 regulates the speed of the steam engine with a negative feedback mechanism (Vogl, 2004: 74-77). Otto Mayr (1986) argues that equilibrium and balance as leading metaphors of historical liberalism eventually evolved into concepts such as self-regulation and feedback control. Clearly resonating with the idea of an invisible hand organising equilibrium without central command structures, technical feedback mechanisms gained prominence and widespread application at the same time as liberal political economy, between the 18th and 19th centuries, even though such mechanisms had been known since antiquity (Mayr, 1986: xvi). Hence, across different fields of knowledge – from engineering to political economy – a pre-cybernetic understanding of circular causality and self-regulatory processes emerged that troubled linear mechanistic causality, and led to a turn towards the living organism as a model for understanding processes observed elsewhere:

As such, the organism is not a political metaphor. That the notion of the organism was in such great demand is due to its being an answer to the question of how to arrive at a systematic coincidence of dynamic processes and stable structures ... Hence, around 1800 a political model of regulation emerged in which the governmental knowledge of Enlightenment and the principles of indirect governing transformed themselves into the observation of control circuits and self-regulatory processes (Vogl, 2004: 78; author's translation)


In summary, biopower needs to be understood in the context of an episteme in which life is at stake as an object of knowledge, signifying phenomena of normalisation, self-regulation, and homeostasis. Biopower, then, refers to the life of populations not only by taking life as its object but also by incorporating a specific knowledge of the dynamics of biological life into its biopolitical mechanisms, thus acting upon the conditions within which a population exists in order to secure a state of homeostatic equilibrium.
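The negative feedback logic invoked across these examples – Watt's governor, the Physiocrats' grain market – can be sketched computationally. The following Python fragment is purely illustrative and appears in none of the sources discussed here; it shows a quantity repeatedly corrected in proportion to its deviation from a set point, settling back towards equilibrium:

```python
# Illustrative sketch of a negative feedback loop (not from the sources
# discussed here): each step corrects a quantity in proportion to its
# deviation from a set point, as in Watt's centrifugal governor.

def regulate(value, set_point=100.0, gain=0.3, steps=40):
    """Return the trajectory of a value pulled back towards the set point."""
    trajectory = [value]
    for _ in range(steps):
        error = set_point - value  # deviation from equilibrium
        value += gain * error      # correction opposes the deviation
        trajectory.append(value)
    return trajectory

if __name__ == "__main__":
    path = regulate(150.0)  # start disturbed well above the set point
    print(f"start: {path[0]:.1f}, end: {path[-1]:.2f}")
```

The point of the sketch is only that the correction is driven by the deviation itself, so no central command is needed to restore the equilibrium.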

 


2.4 Cyborg life

As Sarah Franklin (2000) notes, the geneticisation of biology in the 20th century eventually brought forth a reductionist paradigm wherein life would figure as unambiguous informational processes embodied in the genetic code, thus giving way to its instrumentalisation:

nature becomes biology becomes genetics, through which life itself becomes reprogrammable information (Franklin, 2000: 190)


Lily Kay (2000) attributes this reconceptualisation of the fundamental processes of life in terms of information to the growing influence of cybernetics as a meta-science during and after the Second World War. The mathematical theory of information and systems thinking, together with the practices of electronic computing, served to create a representational space in which the genetic code would figure as "the site of life's command and control" (Kay, 2000: 5). Although not unproblematically and not without contestation, biologists eventually adopted the metaphors and methods of information discourse and started representing genetic processes as goal-directed computer programs, and organisms as self-regulating communication networks or cybernetic systems (Kay, 2000: 17). Clearly, this uptake of informational metaphors in biology was helped by eminent cybernetician Norbert Wiener's functional analogy between living entities and certain machines as both entropy-resisting forms of organisation governed by feedback mechanisms (Wiener, 1989: 26). However, Kay carefully distinguishes between information theory and information discourse, the latter being understood as a "large-scale scientific and cultural shift in representation" (Kay, 2000: 16) affecting not only the life sciences but also sociological and other discourses. Whereas applications of the mathematical theory of information and systemic models in biology often encountered serious conceptual and practical limitations, the informational metaphor nevertheless facilitated productive interactions between different fields of knowledge and thus organised a system of representation that eventually recast vital processes in terms of "information, texts, codes, cybernetic systems, programs, instructions, alphabets, words" (Kay, 2000: 26). This newly informationalised knowledge of life tied into a wider post-war discourse on "automated communication systems as a way of conceptualising and managing nature and society" (Kay, 2000: 127). Cybernetics meshed biological meaning-making with a specific nexus of knowledge and power pertinent to the social wherein

control was abstracted and diffused: it was not a thing but a manifestation; not a mode of decision but a process pervading the whole system ... control systems was redefining the meaning of social and biological phenomena (Kay, 2000: 85-6; emphasis added)


Richard Doyle (1997) highlights the elision of the embodied organism within this shift from the modernist biological paradigm described by Foucault and Canguilhem towards a postvital paradigm that rests upon the informational metaphor of the genetic code. Life, then, would no longer figure as the "subterranean warmth that circulates" (Foucault, 2002: 138) between living things; rather, its elusive character was to be finally reduced to, as well as resolved in, attempts towards complete descriptions of information stored in genetic codes. Hence, the living organism whose 'interest' in its conditions of existence still figured as the driving force of vital normativity in Canguilhem's writings "is nothing but coding" (Doyle, 1997: 17).

This destabilisation of the boundary between biology and information technology, nature and culture, leads Donna Haraway (1990) to call for a cyborg politics. For Haraway, the figure of the cyborg serves to intervene in a social reality where "couplings between machine and organism, each conceived as coded devices" (1990: 191) entail a reconsideration of biopolitics within technologically mediated societies:

Our dominations don't work by medicalisation and normalisation anymore, they work by networking, communications redesign, stress management. Normalisation gives way to automation ... the discourse of biopolitics gives way to technobabble (Haraway, 1990: 194)

As the boundary between machine and organism has been made irrevocably ambiguous, the figure of the cyborg focuses attention on a new "informatics of domination" (Haraway, 1990: 203) wherein the strategies of power are no longer based on the "integrity of natural objects" (Haraway, 1990: 204) but rather regulate flows across systems of communication.

 


2.5 Vital complexity
Nikolas Rose (2007) maintains that the results of the sequencing of the human genome at the turn of the millennium marked a limitation of the informational metaphor, and eventually led to a move away from the linear determinist paradigm of 20th-century genetics towards a contemporary postgenomic understanding of life. The proposition that "life-as-information" has replaced "life as organic unity" (Rose, 2007: 45) no longer holds, Rose argues; rather, contemporary molecular biology is shifting from a genetic reductionism shaped by a doctrine of one-way flow of information (from DNA to RNA to protein) towards a

postgenomic emphasis on complexities, interactions, developmental sequences, and cascades of regulation interacting back and forth at various points in the metabolic pathways that lead to the synthesis of enzymes and proteins. And in the process informational epistemologies seem to have reached their limit; they can no longer capture what researchers do as they represent and intervene in the vital complexities that constitute life at a molecular level (Rose, 2007: 47)

Yet this is not to say that cybernetics' influence on the discourse of biology has vanished. Rose (2007: 48) maintains that simulation techniques and systemic metaphors derived from cybernetics enable contemporary explanations in the life sciences that stress the non-deterministic, complex nature of living systems. The heavily computer-dependent modelling approaches of systems biology, which seek to understand emergent properties of living systems, can then be understood as paradigmatic for the construction of postgenomic knowledges of life that are no longer rooted in the organism but turn towards "simulations of dynamic, complex, open systems" in order to "predict future vital states and hence to enable intervention into those vital systems to reshape those futures" (Rose, 2007: 16). Rose argues that under the emergent regime of biopower, vital normativity "once considered to be inscribed into the laws of organic life" (Rose, 2007: 81) loses its precedence. Represented in simulations, life at the molecular level is thoroughly opened up to manipulation, and contemporary biopolitics seeks to "bring a potential unwanted future into the present and make it calculable" (Rose, 2007: 86).5

5 However, Rose maintains an understanding of biopolitics that pertains primarily to the somatic and mentions this rationality in the context of the notion of susceptibility in genomic medicine (Rose, 2007: 86).

Hence, complexity frameworks are influencing the constitution of contemporary knowledges of life. Rather than a unified and homogeneous theory, complexity denotes a set of interdisciplinary scientific approaches and concerns with shared roots in cybernetics and systems theory, which seek to describe and understand the common properties of complex systems found across physics, biology, economics, sociology and other fields. In the words of Melanie Mitchell, professor at the complexity research centre Santa Fe Institute, a complex system can be broadly understood as

a system in which large networks of components with no central control and simple rules of operation give rise to complex collective behaviour, sophisticated information processing, and adaptation via learning or evolution ... a system that exhibits nontrivial emergent and self-organising behaviours (Mitchell, 2009: 13)


Self-organisation describes how "organised behaviour arises without an internal or external controller" (Mitchell, 2009: 13) within complex systems, and is observed within organisms and populations but is also used to explain physical, social, and technical structures. Closely related, the term "emergence" denotes how "simple rules produce complex behaviour in hard-to-predict ways" (Mitchell, 2009: 13). Rejecting a reductionist approach that would explain the behaviour of a system via an ever more complete description of its single components, research on emergence focuses on how iterative interactions between a large number of components produce patterns of organisation on a macro scale (Mitchell, 2009: 149). For instance, cellular automata – simulations consisting of a grid of cells where each cell's state depends on the states of its immediate neighbours according to a cell update rule – are used to demonstrate the self-organised emergence of structural patterns over time. Work on cellular automata has authorised assertions that biological processes need to be understood as a form of information processing, if not computation (Mitchell, 2009: 157).
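The mechanism behind such demonstrations can be sketched in a few lines of Python. The following is a minimal illustration only, drawn from neither Mitchell's work nor the FuturICT proposal: a one-dimensional cellular automaton (Wolfram's Rule 110), in which each cell's next state depends solely on itself and its two immediate neighbours, yet intricate aggregate patterns emerge over time.

```python
# Minimal one-dimensional cellular automaton (illustrative sketch only).
# Each cell's next state is determined by its own state and those of its
# two immediate neighbours, via a fixed update rule (here: Rule 110).

RULE = 110  # Wolfram rule number encoding all 8 neighbourhood outcomes

def step(cells):
    """Apply the update rule once to every cell (wrapping at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        new.append((RULE >> neighbourhood) & 1)
    return new

def run(width=31, steps=15):
    """Start from a single live cell and record the emerging pattern."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Nothing in the rule mentions the triangular and nested structures that appear when the history is printed; they arise, in the sense described above, from iterated local interactions alone.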




Drawing on his work on genetic regulatory networks, biologist and complexity theorist Stuart Kauffman (1995) argues for emergence as the fundamental living process. In Kauffman's account, the theory of self-organisation figures as a candidate for a "deep theory of biological order" (Kauffman, 1995: 18) which supplements the evolutionary history of rather accidental variation and selection with a fundamental law invoking a certain necessity for life to exist:

Life, in this view, is an emergent phenomenon arising as the molecular diversity of a prebiotic chemical system increases beyond a threshold of complexity. If true, then life is not located in the property of any single molecule - in the details - but is a collective property of systems of interacting molecules ... Life, in this view, is not to be located in its parts, but in the collective emergent properties of the whole they create (Kauffman, 1995: 24)


Hence, emergence displaces evolution as the fundamental process of the living: logically prior to variation and selection, organisation emerges spontaneously, and evolution subsequently selects favourable forms of organisation (Kauffman, 1995: 9). Kauffman is credited with the well-known adage "life exists at the edge of chaos" (Kauffman, 1995: 26), meaning the hypothesis that evolution favours living systems which exist in a transitory state between order and disorder, with just the right amount of organisation to adapt to environmental perturbations.




Complex systems exhibit nonlinear dynamics, i.e. disproportional relations between causes and effects: small, local fluctuations can lead to unpredictable changes in the organisation of the overall system, which either achieves a higher, or different, form of organisation or slips into disorder. Ilya Prigogine and Isabelle Stengers maintain that "nonlinear reactions ... are virtually the rule as far as living systems are concerned" (Prigogine/Stengers, 1984: 153). For Prigogine and Stengers, the nonlinear dynamics embodied in the complex interplay between autocatalytic, crosscatalytic, and inhibitory processes regulating metabolism at the molecular level serve as a model for the overall "functional logic of biological systems" (Prigogine/Stengers, 1984: 154). Crucially, this perspective entails a shift away from homeostasis towards a notion of life that stresses "the instabilities that may occur in far-from-equilibrium conditions"6 (Prigogine/Stengers, 1984: 154) through the nonlinear amplification of minor fluctuations affecting the system.

Furthermore, research on scale-free networks, i.e. networks whose distribution of nodes and links adheres to a power law,7 has led to the assertion that, in general, evolution favours the development of scale-free networks because of their resilient properties, i.e. their characteristic that a failure of random nodes does not threaten the functionality of the overall network (Mitchell, 2009: 248).8 Within biology, scale-free networks are understood to ensure the robustness of metabolic pathways, brain function, and genetic regulation (Mitchell, 2009: 249-251). Across biological, social, and technical networks, "many – perhaps most – real-world networks that have been studied seem to be scale-free; that is, they have power-law rather than Gaussian degree distributions" (Mitchell, 2006: 1198).

6 Within thermodynamics, far-from-equilibrium systems are systems which exchange energy with their environment. Prigogine and Stengers maintain that living systems are necessarily far-from-equilibrium systems (Prigogine/Stengers, 1984: 175).
7 Scale-free networks are constituted of a small number of nodes with a large number of edges (i.e. hubs) and a large number of nodes with a small number of edges. Hence, contrary to a Gaussian or normal distribution, where the distribution of events (here: number of nodes/number of edges) clusters around a mean value, producing a bell-shaped graph, a power-law distribution produces a long-tail graph.
8 The emphasis here is on random, since a failure of one of the small number of nodes with a high degree of edges is less probable than the failure of one of the vast number of nodes with a low degree of edges.
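How such long-tail degree distributions arise can be made concrete with a standard growth mechanism, preferential attachment, in which new nodes link to existing nodes with probability proportional to their current degree. The Python sketch below is illustrative only – it is not taken from Mitchell or any other source discussed here – and uses arbitrary parameters chosen for demonstration.

```python
import random
from collections import Counter

# Illustrative sketch: grow a network by preferential attachment - each new
# node links to existing nodes with probability proportional to their current
# degree - and inspect the resulting long-tail degree distribution.

def preferential_attachment(n_nodes=2000, links_per_node=2, seed=0):
    rng = random.Random(seed)
    # Start from a small fully connected core of three nodes.
    edges = [(0, 1), (0, 2), (1, 2)]
    # 'targets' lists each node once per edge endpoint, so drawing uniformly
    # from it draws nodes proportionally to their degree.
    targets = [v for e in edges for v in e]
    for new in range(3, n_nodes):
        chosen = set()
        while len(chosen) < links_per_node:
            chosen.add(rng.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])
    return edges

def degree_counts(edges):
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

if __name__ == "__main__":
    degrees = sorted(degree_counts(preferential_attachment()).values())
    print(f"min degree: {degrees[0]}, "
          f"median: {degrees[len(degrees) // 2]}, max: {degrees[-1]}")
```

The resulting network has a handful of hubs whose degree is many times the typical node's, i.e. a long-tail rather than bell-shaped degree distribution, which is the property invoked in the resilience argument above.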

However, the spread of complexity metaphors into biological practices of knowledge-making is not entirely uncontested. For instance, Evelyn Fox Keller doubts the "generality of scale-free networks" (2005: 1060) as a universal principle of nature. Demonstrating the ease with which real-world networks can be represented both as adhering to scale-free and to other distributions, Keller cautions against too quick an abstraction from the specificity of the phenomenon studied, and against an overly enthusiastic embrace of complexity theories in biology (Keller, 2005: 1066). However, given the explosion of data about molecular processes, Keller notes that "for the first time in recent history, biologists have strong incentives to welcome the cooperation of physical and mathematical scientists" (Keller, 2005: 1067). Yet contrary to the physicist's urge to posit fundamental laws, Keller maintains that

what is fundamental in biology ... is far more likely to be found in the accidental particularities of biological structure arising early in evolution ... than in any abstract or simple laws (Keller, 2005: 1067).

Moreover, in the context of explaining processes of life as a form of computation, Mitchell concedes that exactly what constitutes information and processing within biological phenomena "tends to be ill-defined" (Mitchell, 2009: 169). To her, complexity provides first and foremost a common language to facilitate exchange and collaboration across disciplinary boundaries (Mitchell, 2009: 252).



In summary, contemporary knowledge of life differs from the knowledge of life underlying Foucault's historical analysis of biopower at the threshold of modernity. Thoroughly informationalised and opened up to representation and manipulation by a computational technology from which it in some cases becomes discursively indistinguishable, contemporary life itself is explained at the intersection of biology and complexity as emergent organisation within nonlinear, networked systems which exist at the edge of disorder. This is not to say that the life sciences are a homogeneous discourse, or that this specific knowledge of life goes uncontested. However, as both Rose (2007) and Haraway (1990) suggest, an attempt at understanding contemporary biopolitics needs to take into account the transformations that the knowledge of life has undergone.

 
 
 
 
 
 
 
 
 
 




3. Methodology
Considering that biopower denotes a nexus of power and knowledge predicated on life, I will draw on Foucault's notion of critique in order to investigate the mutual support of systems of knowledge and technologies of power. Following Foucault, critique is a practice informed by the desire

not to be governed like that, in the name of those principles, with such and such an objective in mind and by means of such procedures, not like that, not for that, not by them (Foucault, 2007a: 44)



Offering a critique, then, always already exists in a relation to an established regime of power. Yet criticism is not limited to invalidating specific practices of government. By encompassing the description and problematisation of the very field circumscribed by the mutual stabilisation of systems of knowledge and mechanisms of power – the field within which a specific critique can be articulated and made intelligible – critique can offer a wider perspective on transformation:

a nexus of knowledge-power has to be described so that we can grasp what constitutes the acceptability of a system (Foucault, 2007a: 61)


However, in my case study I will ask how biopower figures within a specific technoscientific apparatus that has yet to materialise. Considering its emphasis on developing new technologies, as well as its aim to manipulate and transform its object of knowledge through these technologies, FuturICT can be understood as an exemplary technoscientific endeavour in Paul Rabinow's sense of the term:

Representing and intervening, knowledge and power, understanding and reform, are built in, from the start, as simultaneous goals and means (Rabinow, 1992: 236)



In Donna Haraway's rendering of the notion of technoscience, this sense of a confluence of technology and science persists, yet she stresses the discontinuities of the present moment, where some of the defining binary dichotomies of modernity are destabilised:

Technoscience extravagantly exceeds the distinction between science and technology as well as those between nature and society, subjects and objects, and the natural and the artifactual that structured the imaginary time called modernity (Haraway, 1997: 3-4)

For Haraway, a critique of contemporary science and technology engenders understanding them as cultural practices and thus focusing attention on the "metaphors, images, narrative strategies" (Haraway, 2004: 332) that run through the technosciences. Haraway does not affirm the distinctions between "science and politics, or science and society, or science and culture" (Haraway, 1997: 62). Rather, she seeks to avoid reduction to either of these categories and to demonstrate how they are mutually implicated in technoscience's production of "what will count as nature and as matters of fact" (Haraway, 1997: 50). However, acknowledging the transformations brought upon the present moment by the technosciences, Haraway does not offer a purely negative critique, but leverages the cultural dimension of technoscience in order to invent counter-narratives, such as the figure of the cyborg, that might help to imagine different ways of inhabiting the world (Haraway, 1990). Hence, understanding FuturICT as a technoscientific endeavour which is inflected with a biopolitical project of governing living beings yet which exists at the moment exclusively in discursive form, I adopt a Foucaultian approach to critique, supplemented by Donna Haraway's approach to the technosciences, which focuses attention on narrative, metaphor, and the transformative potential of polyphony, in my analyses of the textual and visual materials which make up and surround the FuturICT proposal.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 




4. Case study: governing life in FuturICT
4.1 FuturICT, big data and the technological zone

FuturICT is an ongoing proposal for a large-scale, long-term research project within the Future and Emerging Technologies (FET) programme of the European Union. Fusing complexity science, social science, and information and communication technologies, FuturICT aims to leverage real-time data from the internet in order to simulate macro social processes, thus providing "probabilistic short-term forecasts similar to weather forecasts" for global social and economic developments (FuturICT, 2012: 21). Led by professors Dirk Helbing and Steven Bishop of ETH Zürich and UCL respectively, and involving a range of other European research institutions, FuturICT was eventually selected in 2011 as a candidate flagship project of the FET programme, which promotes "long-term high-risk research, offset by potential breakthroughs with a high technological or societal impact" (European Commission, 2011: 6). Thus, FuturICT is currently competing in a multi-stage selection process for long-term funding over 10 years with a budget of up to 100 million euros per year (European Commission, 2011: 9). The decision as to whether FuturICT will be funded by the European Union has yet to be made; however, the proposal has already generated considerable resonance in the media (FuturICT, 2012a).

The project is rooted in the quantitative tradition of the social sciences, which aims to mathematically model social processes – particularly in the computational approach, which relies heavily on computer technology in order to simulate the emergence of macro phenomena from interactions between agents at the micro level (Humphreys, 2002: 180). However, in its attempt to simulate emergent organisation the project also intersects with the scientific programme and cultural discourse of artificial life (Kember, 2003).
Specifically, the proposed research project consists of developing three interlinked components: the Living Earth Simulator, the Planetary Nervous System, and the Global Participatory Platform (FuturICT, 2012: 16). The Living Earth Simulator is meant to "enable the exploration of future scenarios" (FuturICT, 2012: 18) through the application of agent-based simulation and other modelling methods, serving eventually as a "Policy Wind Tunnel" (FuturICT, 2012: 14) where different political decisions can be tested in advance regarding their probable impact on society. To this effect, the simulations integrate data from the Planetary Nervous System, a "global sensor network ... able to provide data in real-time about socio-economic, environmental or technological systems" (FuturICT, 2012: 18). Finally, the Global Participatory Platform is meant to enable participation of the public in the project and its simulations, e.g. through taking part in "serious multi-player online games" (FuturICT, 2012: 18) which supplement the models.



The FuturICT project is situated within an ongoing discourse on the so-called big data phenomenon, i.e. the increasing availability of large, real-time datasets about the behaviour of human, environmental and technical systems due to the increasing digital mediation of social processes and the spread of sensor and tracking technologies, as well as considerable growth in computing and data analysis capabilities. For instance, a recent study by management consultancy McKinsey (2011: 17) claims a sharp increase in digitally stored data from 50 exabytes in 2000 to 300 exabytes9 in 2007, with an estimated annual growth rate projected at 40% for the near future. Since the data in question are currently amassed to a great extent by private entities, and the advanced storage and analysis techniques figuring under big data are developed mainly by businesses such as Google and Amazon, there is an ongoing debate about possible consequences of the "data deluge" (Economist, 2010) and whether there is a need for regulation regarding issues of privacy and access to, as well as ownership over, data (Bollier, 2010).

In the context of this discourse on big data, a number of social scientists initiated a debate regarding the need to establish a computational social science "that leverages the capacity to collect and analyze data with an unprecedented breadth and depth and scale" (Lazer et al., 2009: 3). Aiming to establish big data research beyond the confines of corporate research and development, the call broaches topics such as privacy regulations, research ethics, proprietary and open data and necessary infrastructures, and calls for establishing shared norms and standards in the research community.10 Here, the notion of the technological zone developed by Andrew Barry (2006) can help to explain some dynamics of the government of technoscientific practices. Barry understands the technological zone as "a space within which differences between technical practices, procedures or forms have been reduced, or common standards have been established" (Barry, 2006: 239). Strategically important for fostering research and development and for ensuring industry development as well as shared and accepted governmental practices, the technological zone articulates not only a set of technical practices within certain spatial boundaries but also the political, legal, market-oriented and moral considerations of a variety of actors (Barry, 2006: 243).

9 One exabyte is 1,000,000,000 gigabytes, or 10^18 bytes.
10 Perhaps the interest of Lazer et al. (2009) in securing access to the behavioural data currently amassed needs to be understood in the context of big data's presumed potential to provide a remedy for computational social science's traditional lack of reliable real-world data (Humphreys, 2002: 171).

The FuturICT proposal, then, explicitly answers the call for a big data-oriented computational social science by Lazer et al. (2009) and promises to develop standards, techniques and regulations to construct a framework for research that considers privacy and intellectual property issues, administrative utility, and business opportunities related to big data within Europe (Bishop et al., 2011: 37). Alongside ensuring the technical interoperability of big data research, the project seeks to "establish ethical standards in cooperations with business and other partners" (Helbing et al., 2011: 169) as well as to "develop privacy-respecting data mining [and] carry out ethical research and develop technologies that increase possibilities for citizens to participate in the social, political and economic system" (Helbing et al., 2011: 168). Negotiating the various political, social, and economic forces with which the project is entangled through an emphasis on ethical and technological standards, FuturICT can be understood as taking part in the construction of a European technological zone during the "limited window of opportunity within which significant decisions can be taken" (Barry, 2006: 242), which opens with the emergence of new and scarcely regulated technological practices such as those surrounding the mining and analysis of big data.




4.2 “Simulating life on Earth and everything it relates to“
A central component of the FuturICT project is the proposal to create a "Living Earth Simulator" (Helbing/Balietti, 2011a: 75), that is, a supercomputing-assisted, integrated simulation of social, economic, and other nested and interacting systems on a global scale. In order to create this simulator, Helbing and Balietti call for a transfer of methods and approaches from both physics and biology to the social sciences, referring to attempts to model social phenomena such as traffic, evacuation scenarios, markets and migration with technologies derived from the natural sciences (2011a: 74-5). In particular, the authors call for learning from those fields of biology which are concerned with the "organization of social species, the immune and neural systems, etc." (Helbing/Balietti, 2011a: 85) – the biological real-world networks commonly studied by complexity scientists (Mitchell, 2009: 169). Considering that complexity theories provide explanations for both natural and social phenomena, Helbing and Balietti argue that the transfer of supercomputing methods from the natural sciences to the social sciences is just a matter of time:

It appears logical that supercomputing will be ultimately moving on from applications in the natural and engineering sciences to the simulation of social and economic systems, as more and more complex systems become understandable and the required data become available. It is obvious that virtual three-dimensional worlds are waiting to be filled with life (Helbing/Balietti, 2011a: 75)


The virtual world of the Living Earth Simulator will be populated with artificial agents from whose repeated local interactions social structures emerge over time at the macro level. Agent-based modelling is considered to have reinvigorated mathematical modelling in the social sciences (Helbing/Balietti, 2011a: 72). Within agent-based modelling, "agents can represent individuals, groups, organizations, companies etc." (Helbing/Balietti, 2011a: 72), whose characteristics, although methodologically based on individualism, point beyond the rational actor conventionally assumed by liberal political economy and take into account behavioural, psychological, and emotional factors as well (Helbing/Balietti, 2011a: 77). Crucially, the focus is on the macro level of an aggregate of agents interacting over time, thus allowing for the study of nonlinear behaviour emerging from the bottom up rather than constructing a model from the top down:

By modelling the relationship on the level of individuals in a rule-based way ... agent-based simulations allow peculiar features of a system to emerge without a priori assumptions on global properties (Helbing/Balietti, 2011a: 73)


Integrating the behavioural data gathered under the rubric of big data with agent-based modelling, the authors hope to build a "global-scale super simulator" (Helbing/Balietti, 2011a: 77) which is meant to move beyond the study of distinct systems towards an expressly holistic view, thus creating a "Living Earth Simulator to simulate the entire globe, including all the diverse interactions of social systems and of the economy with our environment" (Helbing/Balietti, 2011a: 75-6). Consequently, the process to be simulated - that is, the object of knowledge - figures as no less than "life on Earth and everything it relates to" (Helbing/Balietti, 2011a: 76).





This rather far-reaching claim to simulate life on Earth carries a semantic dimension beyond a colloquial understanding of 'life'. Specifically, this claim is authorised by a discourse on artificial life which extends the notion of life to include social and artificial systems, and reformulates the question of what constitutes being alive into a question of form independent of matter by stressing the theme of emergence. Katherine Hayles understands artificial life as a discourse which forms part of a third wave of cybernetics, moving from questions of self-organisation to evolution and change (Hayles, 1999: 222). A central concept of artificial life discourse is emergence, and the goal of its accompanying technoscientific practice is the "creation of computer programs instantiating emergent or evolutionary processes" (Hayles, 1999: 225). A system showing emergent behaviour evolves complex orders from simple rules governing the interactions between its components, and the simulation of this process in computer code, for instance with cellular automata as a precursor of agent-based modelling, constitutes an attempt at "building life from the 'bottom-up'" (Hayles, 1999: 225). Artificial life secures the identification of processes run in computers with living processes through

narratives that map the programs into evolutionary scenarios traditionally associated with the behaviour of living creatures [and] translate the operations of computer codes into biological analogues that make sense of the program logic (Hayles, 1999: 225)

Having at its root the informationalisation of life enacted within molecular biology, artificial life abstracts life from its material instantiation and privileges the form of emergence over matter, hence "computer codes ... become natural forms of life; only the medium is artificial" (Hayles, 1999: 224). In this sense, a system can be said to be alive regardless of whether its material base is information processing in silicon or biochemical metabolism, provided it displays self-organising, emergent, and evolving behaviour (Kember, 2003: 3).

In the context of simulating emergence, culture and society are described as emergent structures. While the aim of these simulations is not to synthesise silicon-based life, but rather to study "life-as-we-know-it" (Kember, 2003: 133), they are nevertheless embedded in an artificial life discourse which deploys narratives of emergence and evolution to naturalise its object of knowledge. In the attempt to simulate artificial societies, social structure emerges from the interactions of artificial agents (Kember, 2003: 139). Kember points to the problematic naturalisation of the social within this

computational approach to the social sciences, employing an agent-based evolutionary perspective influenced by work on cellular automata, genetic algorithms, cybernetics, connectionism, AI and ALife (Kember, 2003: 138-139)

In the seminal work on simulating artificial societies by Epstein and Axtell (1996), the social simulation constructs a "neo-Darwinist scenario involving genetic replication, diversity and selection in a competitive environment of scarce resources" (Kember, 2003: 139). Kember argues that this project amounts to a naturalisation of social relations not unlike sociobiology's ratification of present social phenomena through the assumption of evolutionary necessity (Kember, 2003: 139). However, Epstein and Axtell's work is explicitly referenced in the white paper laying down the vision for the Living Earth Simulator, and presented as an example of successful applications of agent-based modelling in the social sciences (Helbing/Balietti, 2011a: 75). Hence, although FuturICT does not aim to create silicon-based life, the project nevertheless partakes in a discourse which designates emergent organisation as the key characteristic of life, regardless of its specific materiality. If life is abstracted from its material and embodied specificity and redefined as the emergent organisation of complex systems, it becomes possible to claim that the object of knowledge to be simulated is life on Earth, if not life itself.

Emergence, Sarah Kember argues, is the key concept with which artificial life discourse constructs the naturalness of its object of knowledge because it "confers on computers ... the power of evolution – the power to evolve life" (Kember, 2003: 56). The unpredictability of emergent behaviour transforms computer simulations into a legitimate nature-like object to be studied in the manner of the natural sciences, since it evokes a scientist-subject as a witness of nature separated from his or her object of knowledge. While the "constructivist premise of the project" (Kember, 2003: 58) remains evident, the simulation of emergence nevertheless allows for claiming a scientific subjectivity which witnesses the evolution of a natural process rather than insinuating an engineer constructing a technical object. Hence, the naturalisation of the object of study in the discourse and practices of artificial life depends on constructing the subject position of the witnessing scientist, resting on the "generative ability of simulation, synthetic and visualisation machines and ... the multiplicity of potential witnesses" (Kember, 2003: 58).

In the FuturICT proposal the naturalness of the object under study is secured by the metaphor of the "socioscope", which allows the observer to witness the emergence of social phenomena in an experimentalist configuration similar to the natural sciences:

As agent-based simulations are suited for detailed hypothesis-testing, one could say that they can serve as a sort of magnifying glass or telescope ("socioscope"), which may be used to understand reality better (Helbing/Balietti, 2011a: 73)



The metaphor of the socioscope, imagined as a scientific instrument allowing for observation of and experimentation with social processes, is crucial for naturalising the object under study and thus authorising the claim to produce matters of fact, that is, objective knowledge. In their study on the origins of the experimental method, Shapin and Schaffer (1985) argue that the establishment of natural matters of fact - that is, of deriving knowledge directly from nature, which authorises this knowledge as objective - rests upon the elaborate deployment of material, discursive, and social technologies in order to erase human agency from the process of knowledge production:

To identify the role of human agency in the making of an item of knowledge is to identify the possibility of its being otherwise. To shift the agency onto natural reality is to stipulate the grounds for universal and irrevocable assent (Shapin/Schaffer, 1985: 23)

Hence, matters of fact come to be accepted as such if they can be presented as coming into being without involving human agency. The scientific instrument as a "means of intellectual production" (Shapin/Schaffer, 1985: 26), then, plays a crucial role in the discursive erasure of human agency from the construction of matters of fact. The instrument acts as an "objectifying resource" which not only produces new objects to be perceived, but also "stands between the perceptual competences of a human being and natural reality itself ... the machine constitutes a resource that may be used to factor out human agency in the product" (Shapin/Schaffer, 1985: 77). However, in order to complete the production of matters of fact it is necessary to make the experiment public and organise a community of witnesses who can give assent to the proper coming into being of said facts. The witnessing of the experiment "was to be a collective act ... the reliability of testimony depended upon its multiplicity" (Shapin/Schaffer, 1985: 56). This multiplication of witnesses is achieved through public demonstrations before a community of scholars on the one hand, and through narrative conventions of scientific writing which construct an author-subject as a "disinterested observer" (Shapin/Schaffer, 1985: 69) on the other, which involves evoking a gesture of modesty:

A man whose narratives could be credited as mirrors of reality was a modest man (Shapin/Schaffer, 1985: 65)



The figure of the scientist as a modest witness who, presumably removed from his embodied being in the world, narrates matters of fact derived directly from a nature separate from the knowing subject, serves Donna Haraway (1997) as one focal point from which to examine the political implications of technoscience. To Haraway, this constitution of the boundary between what counts as natural matters of fact and what counts as cultural or social, enacted through the figure of the modest witness, is a profoundly political act, since it separates the distinct domains of objective knowledge and rational, technical solutions on the one hand, and the field of political contestation on the other (Haraway, 1997: 24).11

Presenting the Living Earth Simulator as a socioscope, then, redraws the boundary between the social and the natural. Simulated social processes can henceforth be studied as if they were natural: unfolding as emergent phenomena, they are observed as processes seemingly separate from the observer. Moreover, the theme of emergence allows the authors of the project proposal to let a reference to previous work on cellular automata testify to their claim that agent-based modelling reflects reality "in a natural way" (Helbing/Balietti, 2011a: 72).

Furthermore, the metaphor of the socioscope underlines the naturalness of the object under study since it evokes an experimentalist positioning of the scientist toward his or her instrument and the object thus studied, constituting emergent life on Earth as a new perceptual object that belongs to the natural domain. However, the multiplication of witnesses in FuturICT extends beyond a scholarly community, and is envisaged as potentially encompassing a wider public. The authors propose to create "new visualisation centers for sophisticated three-dimensional animations and the demonstration to a larger number of decision-makers, from expert panels, over managers and politicians to the interested public" (Helbing/Balietti, 2011a: 86). In FuturICT, the "mimetic device" (Shapin/Schaffer, 1985: 62) of realistic visual representation, which used to accompany the modest witness' description of the experiment, is transposed into the postmodern key of immersive "virtual reality" (Helbing/Balietti, 2011a: 90). Hence, FuturICT invites a multiplicity of witnesses to immerse themselves in a simulation of the social relations in which they are heterogeneously situated – yet which they encounter as a nature separate from themselves, as emergent life itself.

11 Haraway argues that gender played a decisive role in the constitution of the modern scientific method. Contrary to Shapin and Schaffer, she suggests that the peculiar absence of women in early experimentalist configurations needs to be understood not as an effect of preformed gender identities, but rather as constitutive of a specific gender relation which authorises certain knowledges as objective. Modesty, she argues, came into being as a gender relation wherein "female modesty was of the body; the new masculine virtue had to be of the mind" (Haraway, 1997: 30). Objectivity, then, was from the beginning a prerogative of men who "were to be self-invisible, transparent, so that their reports would not be polluted by the body" (Haraway, 1997: 32).



4.3 Simulation, situatedness, transformation
The Living Earth Simulator will be calibrated by real-world data,12 and mining the abundance of behavioural data under the rubric of big data figures as an opportunity to "quickly increase the objective knowledge about social and economic systems" (Helbing/Balietti, 2011: 4). Speaking in an interview with Edge magazine about the increasing availability of behavioural data and their use in predictive analytics, project head Dirk Helbing notes the possibility that

information communication technologies eventually will create a God's-eye view: systems that make sense of all human activities, and the interactions of people (Helbing, 2012)

12 Feeding these data into the Living Earth Simulator is the function of the envisioned data mining infrastructure, the Planetary Nervous System, which is itself a biological metaphor resonating with the informationalisation of life.
life.




This fantasy of god-like omniscience, of which Helbing is clearly aware, and which is embodied in the metaphor of the socioscope as a scientific instrument that yields an objective view on society as a whole, is criticised by Donna Haraway as a fundamental gesture underlying the assertion of objectivity in the technosciences. Against "dreams of the perfectly known in high-technology" (Haraway, 1988: 589), Haraway calls for situated practices of knowledge-making that avoid the dichotomy of objectivity and relativism, i.e. practices that, while being able to give accounts of the world that are not reducible to rhetorics and language play, still bear in mind the (historical) contingencies affecting and limiting specific ways of knowing the world, and are therefore able to pave the way for ethicopolitical projects of transformation (Haraway, 1988: 579). Haraway approaches the possibility of such an account by reclaiming the figure of the modest witness, and the act of witnessing as an act of seeing: "The modest witness I am calling for is one that insists on situatedness" (Haraway, 2000: 160). Identifying the proliferation of technological and scientific techniques of visualisation with an "ideology of direct, devouring, generative, and unrestricted vision, whose technological mediations are simultaneously celebrated and presented as utterly transparent" (Haraway, 1988: 582), Haraway seeks to reconstruct vision as embodied and partial perspective through her insistence on situatedness. The erasure of the subject enacted in scientific practices of witnessing, supported by instruments of visualisation as resources for objectification, amounts to a "god trick of seeing from nowhere" (Haraway, 1988: 581), whereas Haraway proposes "views from somewhere" (Haraway, 1988: 590) by reintroducing the specific subject position of the witness into the act of witnessing. Taking into account a poststructuralist understanding of the subject as one that is (re-)produced through and located in discourse and practices, and therefore always partial, unfinished and relational, Haraway proposes the modest witness as a knowing subject that enables encounter and shared practices of knowledge-making precisely because this subject's perspective is necessarily partial:

Subjectivity is multidimensional; so, therefore, is vision. The knowing self is partial in all its guises, never finished, whole, simply there and original; it is always constructed and stitched together imperfectly, and therefore able to join with another, to see together without claiming to be another. Here is the promise of objectivity, that is, partial connection (Haraway, 1988: 586)


FuturICT's claim to produce objective knowledge through omniscient vision does not go entirely uncontested in the complexity science community. P.M. Allen (2011), a complexity scientist invited to comment on the FuturICT proposal in a special issue of the European Physical Journal, doubts the promise that simulation can produce objective knowledge leading to solutions for political problems:

whatever model one creates will be seen by the different agents and actors as being within their own ethical framework and so what may seem perfectly fair to one, may well be seen as totally unjust by others (Allen, 2011: 137)



Hence, in his comments on the political promises of the FuturICT project Allen problematises a relation between truth claims, power, and perspective similar to that addressed in Haraway's call for situated knowledges. Assent cannot be an effect of the model itself, but must rest on negotiating the different perceptions of what the model represents, which entails engaging in an encounter between differently positioned subjects. Firmly objecting to FuturICT's proposition that contemporary global problems can be solved given a presumably objective understanding of the "fundamental laws and processes underlying societies" (Bishop et al., 2011: 34), Allen provides a take on the effectiveness of modelisation that knows of its limitations as a knowledge-making practice:

fundamentally the difficulties and grand challenges we face are a reflection of the human condition in which conflicts of interest really exist and power structures operate at all levels within the global system ... this does not mean that we should not make models since that is the only thing we can do ... but it is simplistic to suppose that there can be objective knowledge of outcomes and that these can solve the problems and challenges we face (Allen, 2011: 138)


However, if understood in a non-essentialist way, the human condition of which Allen speaks amounts to a sedimented history. This history is precisely what marks the difference between physical, living, and social systems, as Isabelle Stengers remarks in light of earlier attempts to transfer complexity models across disciplinary boundaries:

Contrary to chemical systems for which we are supposed to take into account all the possibilities of reaction, living and historical individuals, cells, termites, or humankind whose collective behaviour we can envisage studying are characterised by an indefinite multiplication of interactions. Thus, a choice is imposed and the model can have no other value or validity than that of this choice (Stengers, 1997: 74)


This choice, then, implies a strong responsibility, as those who choose how to construct the model "are always in danger of ratifying the definition of a system as it is given in the circumstances where they find it" (Stengers, 1997: 74). Hence, how to design artificial agents and their interactions from which social structures emerge is a profoundly political choice that privileges the status quo precisely if the exactitude and objectivity of the simulation, and of the futures extrapolated from it, are meant to stem from the integration of behavioural data from the Internet, as is the case in FuturICT:

By selecting, in their description of a system, the interactions that have been stabilised and privileged by the historical, social, and political context, they not only take note of this context but also justify it, because their models can only negate or overshadow the possibility of other behaviours that do not respond to the dominant logic (Stengers, 1997: 75)




FuturICT's project of simulating society as a living system, then, runs the risk of stabilising a given normative reality through its insistence on an objectivity derived from the integration of real-world data which calibrate a simulation that is meant to generate social structures "in a natural way". Promising an omniscient God's-eye view presumably devoid of positioned subjectivity through its reliance on technology, this act of privileging the given order over the possible seems fraught with an inherent risk of conservative bias. Contrary to the rhetoric of emergence, evolution, and change, avenues for transformation can be closed off by disregarding embodied subjectivity, historical contingency, and thus openness to difference and transformation.
 
 
 
 
 
 


4.4 Imag(in)ing global life
In FuturICT, the life to be simulated is imagined as harbouring potential threats, thus underscoring a progress narrative that promises technoscientific salvation from impending crises inherent in the complexity of systems.

In the animated introduction to the promotional video on the FuturICT website (Fig.1), a globe revolves from which strings of binary data emerge and reach out into orbit, eventually encircling the globe and weaving it into a sphere of interconnected data streams in which words like 'famine', 'drought' and 'migration' pop up (FuturICT, 2012b). The accompanying music evokes a sense of urgency reminiscent of a news broadcast.

 



Fig.1 Video still of revolving globe enveloped by data (FuturICT, 2012b)




Fig.2 The whole earth as interface (European Commission, 2011: 10)


In the publication of the European Commission announcing FuturICT as an FET flagship candidate (Fig.2), an illustration shows two white people, a man and a woman, visible through a translucent interface showing a photorealistic three-dimensional globe framed by a number of graphs, charts and diagrams as well as pictures of trees, bark, surf and cloudy sky. Visibly fascinated by the animation, the man points at the globe while the woman stands by, watching docilely (European Commission, 2011).


While the latter illustration invites a western-centric and gendered reading where white man figures as master of a technologically rendered world, there appears to be a semantic connotation shared between both illustrations, evoking a notion of vulnerable nature remade through technology and underscoring a progress narrative reinvigorated by the notion of complexity. Franklin et al. (2000) analyse the globe as an icon that "encapsulates contemporary understandings of life on earth", conveying a sense of "both endangered fragility and the vitality of a luminescent life force" (Franklin et al., 2000: 27). Photographed from space, the picture of the whole earth is a product of technological prowess, yet it also carries deep ambiguities. The narrative of technological mastery and progress it plays into, elevating the viewer into a "God's-eye" (Franklin et al., 2000: 27) position, is counteracted by "fears concerning future human survival, and the technological risks necessary to produce such images in the first place" (Franklin et al., 2000: 31). The loss of a horizon, that is, seeing the earth as a "whole, discrete entity", lends the picture to notions of "shared planetary interdependence", insinuating a "newly imaged and imagined form of global unity" (Franklin et al., 2000: 28). The nature thus imagined through the image of the whole earth is wholly remade by technology and vulnerable, but nonetheless natural: it serves, as Franklin et al. argue, as a transformed context to ground the interpellation of a subject that understands itself as part of a global population at shared risk (Franklin et al., 2000: 26). Juxtaposing patterns of nature with graphs and diagrams, and representing life on Earth as emerging informational patterns bearing potential risks, the visual narrative surrounding FuturICT evokes a sense of a technologically permeated and interconnected world under threat which necessitates a God's-eye view in order to understand and master it. The threat inherent in life is linked with its understanding as a complex system. In a white paper positing the need for FuturICT, Helbing and Balietti state that the "grand challenges facing mankind in the 21st century" are potentially catastrophic crises conceived as "systemic instabilities, and other contagious cascade-spreading processes" (Helbing/Balietti, 2011: 3). Helbing and Balietti situate the root causes of contemporary crises and conflicts such as the global financial crash and the Greek uprising against austerity in the nonlinear behaviour of complex systems where the "impact of random local events of perturbations becomes systemic in size" (Helbing/Balietti, 2011: 19).

This sense of complex systems potentially running out of control and threatening mankind, which hence must be mastered by scientific rationality, resonates with a gendered imaginary that traditionally informs western notions of the relation between science and nature. Sandra Harding (1986) argues that a gendered imaginary of a distinctly female nature lies at the root of the modern scientific worldview. Nature was imagined as existing in a tension between the images of the passive woman, indifferent to the active male's experimental work on her body - frequently imagined as a form of violation - and an "unruly, wild nature" (Harding, 1986: 115) figuring as a threatening and unstable woman associated with the breakdown of order. What is more, Barbara Creed (1993: 1-2) argues that in Western culture the gendered imaginary of the "monstrous-feminine" was frequently associated with the "advent of natural disasters". This feminine nature-out-of-control, then, was to be tamed and mastered by scientific knowledge-making if man was to "control his fate" (Harding, 1986: 115).


Moreover, Donna Haraway (1997: 8) maintains that narratives of impending apocalypse from which the technosciences promise salvation lie at the root of modern notions of progress. The narrative of FuturICT is no exception. Given the threat emanating from "hopelessly complex" (Bishop et al., 2011: 34) systems, and invoking the experience of contemporary crises such as the financial crash and recurrent political instabilities, the initiators of FuturICT assume a "moral obligation" (FuturICT, 2012: 3) to understand and manage complexity:

Quick scientific progress is needed in order to learn how to efficiently stop the on-going cascading effects and downward trends (FuturICT, 2012: 3)


Understanding complexity, then, is infused with a moral discourse in which a lack of awareness of complexity leads to catastrophe. Hence, although in FuturICT the modern metanarrative of scientific progress (Lyotard, 1984) appears to be tainted, if not broken, by an acknowledgment of the profusion of risks accompanying the development of technological systems, the narrative is eventually resurrected and stabilised with the promise of a new scientific perspective and technologies for understanding and managing complexity. In fact, as Isabelle Stengers (1997) notes, the discourse on complexity tends to stress themes of crisis to establish complexity as a new paradigm:

What seems to happen is that themes of world crisis, and a questioning of the presuppositions that allowed us to underestimate the crisis or to think of it as epiphenomenal, are interwoven with the themes of a 'new rationality' (Stengers, 1997: 4)

Thus, the abounding themes of "instability, crisis, differentiation, catastrophes, and impasses" (Stengers, 1997: 4) in complexity discourse come to underscore a strategic intervention to replace an old paradigm with a new one, and thereby postulate the prospect of solving impending problems that hitherto could not be solved. Repeating the gesture of scientism, then, the science of complexity "is heralded as solution to ethicopolitical problems" (Stengers, 1997: 4).

 




4.5 Biomimetic government
In a working paper on systemic risks for the Santa Fe Institute, project head Dirk Helbing (2009) outlines the problems of control in complex systems, chief among them those issues stemming from their nonlinearity, that is, unpredictability in the long run, and the disproportionality of cause and effect wherein "big changes may have small or no effects" but "small changes may cause a sudden regime shift" if the system is near a critical point (Helbing, 2009: 5). The world Helbing envisions is far from equilibrium and might best be characterised as "always already out of control" (Kember, 2003: 117), thus demanding novel strategies of governing:

in a strongly varying world, strict stability is not possible anymore ... a paradigm shift towards more flexible, agile, adaptive systems is needed, possible, and overdue (Helbing, 2009: 8)

Considering this unstable world, Helbing maintains that "complex systems cannot be controlled in the conventional way" but should rather be managed by "strengthen[ing] the self-organisation and self-control of the system" (Helbing, 2009: 6). Managing complexity, then, implies taking into account a certain knowledge of the momentum and intrinsic dynamics of complex systems in order to stimulate the emergence of favourable system behaviour, that is, to "work with the system rather than against it" (Helbing, 2009: 8).

Of course, this emphasis on order emerging from the interactions of individuals in a self-organised way is reminiscent of the invisible hand of liberalism. However, while Helbing acknowledges the metaphor of the invisible hand as prefiguring the notion of self-organisation, he emphasises that self-organisation need not necessarily lead to an optimal equilibrium; rather, equilibrium needs "properly chosen [interaction] rules" (Helbing, 2009: 9). One of the techniques Helbing proposes to achieve this is mechanism design, which attempts to influence the interactions between system elements in order for favourable behaviour to emerge from the bottom up:

regulations should not specify what exactly the system elements should do, but set bounds to actions (define 'rules of the game'), which give the system elements enough degrees of freedom to self-organize good solutions (Helbing, 2009: 6)


Helbing et al. (2009) provide an example of such a mechanism in a paper on a biomimetic logistics system taking its cues from biological transport systems at the molecular level. Drawing an analogy between human-made traffic systems and cellular metabolism as both being complex systems, they identify natural success strategies to be applied to logistics in general:

The underlying success strategies include extensive recycling, self-organization, self-assembly, and self-repair. These properties are largely based on local interactions, i.e. on decentralised control approaches. (Helbing et al., 2009: 538)

The control strategy thus derived from biological knowledge and put into practice in traffic control results in bottom-up self-organisation, which is conceived as markedly different from top-down feedback control and is held to produce a much more efficient and robust system performance that is able to tolerate fluctuations. In essence, the mechanism consists of a continuously measuring prediction system of traffic flow, which regulates the traffic light system so that "traffic streams control the traffic lights rather than the other way around" (Helbing et al., 2009: 543).
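This inverted control relation - measured flows driving the signal rather than a fixed plan - can be rendered as a toy sketch. The following is a drastic simplification for illustration only, not the controller of Helbing et al.; the arrival rates and the switching margin are hypothetical parameters:

```python
import random

def demand_driven_lights(steps=500, arrival=(0.3, 0.5), margin=3, seed=7):
    """Toy demand-responsive intersection of two crossing streets, one green
    phase at a time. Instead of following a fixed cycle, the green switches
    to the other street once that street's queue exceeds the served queue by
    `margin` vehicles: the traffic streams drive the light, not vice versa.
    (Illustrative simplification, not Helbing et al.'s actual controller.)"""
    rng = random.Random(seed)
    queues = [0, 0]   # waiting vehicles on street 0 and street 1
    green = 0         # index of the street currently served
    arrived = served = 0
    for _ in range(steps):
        for i, p in enumerate(arrival):           # stochastic arrivals
            if rng.random() < p:
                queues[i] += 1
                arrived += 1
        red = 1 - green
        if queues[red] - queues[green] > margin:  # small local feedback signal
            green = red
        if queues[green] > 0:                     # serve one vehicle per tick
            queues[green] -= 1
            served += 1
    return arrived, served, queues
```

The switching rule is a "gentle interference" in Helbing's sense: it sets bounds on behaviour rather than prescribing a schedule, and the resulting green-phase pattern emerges from the measured demand itself.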
The proposed mechanism seems to share some of its functional logic with the process of modulation which Gilles Deleuze (1992) describes in his essay on the societies of control. Modulation governs flexibly, "like a sieve whose mesh will transmute from point to point" (Deleuze, 1992: 4). Analysing the interplay between data collection and predictive analytics in digital environments as a form of modulation, David Savat states that "it is now the environment that adjusts to you, and does so in advance" (Savat, 2009: 57).


Crucially, the bottom-up approach to self-organisation in this case does not simply mean laissez-faire, but rather aims to shape the conditions under which favourable processes of self-organisation can occur. The authors derive from the functional principles of biological "autocatalytic and inhibitory processes" a regulatory technique of "gentle interference" (Helbing et al., 2009: 544), which means that

if the interactions between the system elements are suitable, only small feedback signals are necessary to reach the desired behaviour (Helbing et al., 2009: 544)



In their white paper on FuturICT, Helbing and Balietti (2011a: 82) express the conviction that similar biomimetic technologies might be applied to the governance of complex social and economic systems as well, and envision FuturICT as researching and developing such techniques.

This attempt at increasing robustness through stimulating self-organisation is further emphasised in FuturICT's stated goal of increasing the "systemic resilience of the society" (Bishop et al., 2011: 36). Resilience is the capacity of a networked system to withstand external or internal perturbations, and is closely related to the different network topologies described in graph theory, wherein



scale-free networks are not resilient to failures of their hubs, and are thus quite vulnerable to accidental failures or targeted attacks (Mitchell, 2006: 1200)
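Mitchell's point about hub vulnerability can be glossed with a deterministic toy comparison (my own illustration, not Mitchell's example): a star network, an extreme caricature of a hub-dominated scale-free topology, falls apart when its hub is removed, while a decentralised ring with the same number of nodes stays connected.

```python
# Toy contrast between hub-dependent and decentralised topologies.
# Removing the single hub of a star shatters the network into isolated
# nodes; removing any one node of a ring leaves a connected path.

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component, ignoring `removed` nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in seen or start in removed:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            comp += 1
            for nb in adj[node]:
                if nb not in seen and nb not in removed:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, comp)
    return best

n = 10
star = {i: [0] for i in range(1, n)}          # every node links to the hub
star[0] = list(range(1, n))                   # the hub links to every node
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

hub = max(star, key=lambda v: len(star[v]))   # node 0, the highest-degree node
print(largest_component(star, removed={hub})) # star disintegrates
print(largest_component(ring, removed={0}))   # ring survives as a path
```

Real scale-free networks are less extreme than a star, but the asymmetry between random failure and targeted hub removal is of the same kind.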


Increasing the resilience of a network means reducing its susceptibility to cascading failure (Helbing, 2009: 9). Cascading failure denotes the phenomenon whereby faults of single critical nodes can have nonlinear system-wide effects, threatening the network as a whole if a certain threshold is reached (Helbing, 2009: 3). In a certain sense, the threat to the system is inherent in its complexity: aside from external shocks which might put the network under stress, the nonlinear amplification of internal fluctuations entails that "some endogeneous processes can automatically drive the system towards a critical state" (Helbing, 2009: 4).
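The threshold logic of cascading failure can be sketched in a toy load-redistribution model. All figures and rules below are invented for illustration; this is not Helbing's model:

```python
# Toy cascading-failure model: each node carries a load and has a capacity;
# when a node fails, its load is redistributed to its surviving neighbours,
# which may push them past capacity in turn. A single local fault can thus
# propagate nonlinearly through the whole network.

def cascade(adj, load, capacity, initial_failure):
    """Return the set of nodes that end up failed."""
    failed = {initial_failure}
    frontier = [initial_failure]
    while frontier:
        node = frontier.pop()
        alive = [nb for nb in adj[node] if nb not in failed]
        if not alive:
            continue
        share = load[node] / len(alive)     # redistribute load equally
        for nb in alive:
            load[nb] += share
            if load[nb] > capacity[nb]:     # threshold exceeded: nb fails too
                failed.add(nb)
                frontier.append(nb)
    return failed

# A ring of 6 nodes, each loaded close to capacity: one fault takes down all.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
load = {i: 2.0 for i in range(6)}
capacity = {i: 2.5 for i in range(6)}
print(sorted(cascade(adj, load, capacity, initial_failure=0)))
```

With generous spare capacity the same fault stays local; the system-wide outcome depends nonlinearly on how close each node sits to its threshold.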

Understood as co-evolving with its environment, the system's capacity to maintain organisational complexity despite perturbations adds the dimension of evolutionary adaptability to the notion of resilience. For Helbing and Balietti, resilience and evolutionary adaptation represent an argument for the maintenance of social diversity, the protection of minorities, and strong privacy controls as preconditions for social systems to adapt to an unstable environment:

As is known from evolutionary theory, innovation thrives best when there is a large diversity of variants. In other words, diversity or 'pluralism' is the motor driving innovation. Would we just orient ourselves at the majority or what is 'normal' (the average), the innovation rate and, with this, adaptability to changing (environmental) conditions would be poor (Helbing/Balietti, 2011: 27)


Grounding this analogy between social change and biological evolution firmly in a liberal democratic teleology, the authors proceed to argue that a lack of social diversity, and thus adaptability, is "actually the reason why totalitarian regimes are sooner or later destined to fail" (Helbing/Balietti, 2011: 27).

Yet diversity per se is not held to be sufficient. Rather than letting evolution run its course, Helbing and Balietti envision a "paradigm shift in decision-making" which involves using the Living Earth Simulator to experimentally test the probable outcomes of a set of policy options before implementing them (Helbing/Balietti, 2011a: 86). Here, simulation precedes implementation, and genetic algorithms are envisaged to create and test different policy options before they are implemented in reality. Genetic algorithms are computer programmes that

employ Darwinian principles of evolution in order to increase the fitness of successive generations of algorithms, where fitness is a measure of success in solving specific computational problems (Kember, 2003: 123)
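Kember's definition can be illustrated with a toy genetic algorithm. All parameters are invented, and the fitness function is the textbook exercise of maximising the number of 1-bits in a string, standing in for "success in solving specific computational problems":

```python
# Toy genetic algorithm: successive generations of bit-string "solutions"
# are selected, recombined, and mutated, with fitness measured as success
# on a computational problem (here, simply maximising 1-bits).
import random

random.seed(42)

def fitness(genome):
    return sum(genome)                      # problem: maximise the 1-bits

def evolve(pop_size=20, length=16, generations=30):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]       # crossover of two parents
            i = random.randrange(length)
            child[i] ^= random.random() < 0.1   # occasional mutation
            children.append(child)
        pop = parents + children            # elitism: parents survive intact
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fitter half survives each generation unchanged, the best fitness can never decrease; the population is driven towards ever better solutions without any global plan.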



However, which of the new policies generated by genetic algorithms will enjoy system-wide implementation is again determined by their evolutionary success, which is derived from measuring the policy's performance in an initially limited real-world domain (Helbing/Balietti, 2011a: 86). Thus, the authors imagine the governmental process as a guided evolution over multiple steps: variations are designed with the aid of genetic algorithms, then tested locally as to their fitness and, if proven successful, applied to the whole population. Helbing and Balietti frame their reimagined governmental process in evolutionary terms, promising a method of securing political success authorised by nature itself:


In some sense, this approach to implementing innovations is more along the lines of how nature seems to work. In fact, the described approach basically follows the principles of evolution, with the main difference that some of the testing of new solutions happens in the virtual rather than the real world, and only the best variants are deployed in reality (Helbing/Balietti, 2011a: 86)


Hence, as a consequence of the naturalisation of the social as a complex living system embodied in the Living Earth Simulator, a set of regulatory techniques is proposed that establishes a specific governmental architecture which incorporates the dynamics of life. Regulating the social according to the intrinsic characteristics of living systems, which are no longer securely homeostatic but rather prone to sudden change, these technologies seek to stimulate self-organisation through the design of mechanisms that govern social interactions in order to produce the emergence of favourable organisation, to increase systemic resilience by rendering diversity a resource necessary for adaptation, and to apply evolutionary principles in order to govern social change.

 
 
 
 
 
 
 
 
 
 
 


5. Conclusion: biopolitical simulations
Writing on the cultural significance of nonlinear dynamics, Katherine Hayles argues that the latter half of the twentieth century witnessed a widespread shift towards "exploring the possibilities of disorder" (Hayles, 1990: xi). Whereas the scientific study of nonlinear systems could only take off after more computers became available as scientific instruments in the 1960s and 1970s, this period, through environmental and economic crises, already produced the "growing realisation that the world itself has become (or already was) a complex system economically, technologically, environmentally" (Hayles, 1990: 5). Considering the "rapid development of information technologies" in the subsequent decades, an "increasing awareness of global complexities, and consequent attention to small fluctuations" (Hayles, 1990: 9) provides the cultural frame for the question of how order might be achieved in a world rendered fundamentally unstable.

Complexity, then, poses a question to which life is the answer: how to govern living beings in an "unstable world where small causes can have large effects" (Prigogine/Stengers, 1984: 260). Whereas the specific knowledge of life in the 19th century provided a functional model for organising circulations within a population, in FuturICT life figures as an answer to the question driving the complexity sciences, that is, "how to cope with a complex environment ... by achieving a kind of poised state balanced on the edge of chaos" (Kauffman, 1995: 86). Within the knowledges of life which Canguilhem discusses in the context of 19th century biology, the living organism figured as a homeostatic apparatus maintaining its equilibrium in negotiation with its milieu (Canguilhem, 1994: 85). Moreover, Foucault shows how the themes of homeostasis and equilibrium played a decisive role in the establishment of biopolitics at the threshold of modernity; a biopolitics which aimed to maintain a stable equilibrium within a population by incorporating into its regulatory techniques a specific knowledge of life as a normative process tending towards homeostasis (Foucault, 2007: 37; 2004: 241). While Canguilhem (1994: 86-7) acknowledges the continuity between the 19th century knowledge of the living being as a homeostatic entity and 20th century cybernetic theories of self-regulation, he formulates a problem that would later be addressed by theories of complexity. If the organism is conceived as a cybernetic apparatus continually self-regulating in order to resist the general tendency towards the disorder of entropy, how can the existence of ordered living beings be accounted for in the first place? Or, in Canguilhem's own words: "Is organisation order amidst disorder?" (Canguilhem, 1994: 87). This is precisely the question which theories of emergence seek to answer by investigating the spontaneous self-organisation of order, in which "entropy-rich systems facilitate rather than impede self-organisation" (Hayles, 1990: 9) and "nature ... can renew itself precisely because it is rich in disorder and surprise" (Hayles, 1990: 11). Disorder, then, becomes the condition of possibility of order, and the self-organisation of living systems the key to maintaining order amidst a turbulent environment, thus providing a functional model for governing complex systems.

Hence, in FuturICT biopower persists as a modality of power which governs living beings according to a specific incorporated knowledge of life. Life becomes an object of knowledge within simulations which employ agent-based modelling to study the self-organised emergence of social structures, and a governable domain through the development of techniques which seek to regulate complex systems according to their functional dynamics. While FuturICT's depoliticised technological rationality is not a novelty in the context of global simulations (Ashley, 1983), the specificity of the project lies in the instrumentalisation of life for authorising mechanisms of power. The proposed technologies for governing complex systems are not only derived from the observation of biological processes, but are legitimised precisely because, in the discourse informing FuturICT, social systems figure as alive in a similar sense to biological systems: since both can be understood as nonlinear systems which evolve forms of organisation that exist always at the edge of disorder, the governance of living beings becomes a matter of managing complexity in order to preserve the precarious stability of a threatened social order. Paraphrasing Foucault's (2007: 22) remarks on the relation between nature and artifice in biopolitics, then, one could say that the biomimetic technologies envisioned in FuturICT function as a nature in relation to a population that, while being woven from social relations, also functions as a living system.
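The agent-based modelling referred to above, in which macro-level social structures emerge from local interaction rules, can be glossed with a minimal Schelling-style segregation sketch. This is an illustration of the genre, not a FuturICT model; the rule, grid, and parameters are invented:

```python
# Minimal Schelling-style agent-based model on a line: agents of two types
# relocate (swap places) whenever too few neighbours share their type.
# Clustering tends to emerge from this purely local rule -- the kind of
# self-organised structure that agent-based simulations are used to study.
import random

random.seed(1)

def similarity(grid):
    """Fraction of adjacent pairs whose members are of the same type."""
    pairs = list(zip(grid, grid[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def step(grid, want_same=0.5):
    """One sweep: each agent unhappy with its neighbourhood swaps away."""
    g = grid[:]
    for i in range(len(g)):
        nbs = [g[j] for j in (i - 1, i + 1) if 0 <= j < len(g)]
        same = sum(nb == g[i] for nb in nbs) / len(nbs)
        if same < want_same:                  # unhappy agent moves:
            j = random.randrange(len(g))      # swap with a random site
            g[i], g[j] = g[j], g[i]
    return g

grid = [random.choice("AB") for _ in range(40)]
initial = grid[:]
for _ in range(200):
    grid = step(grid)
# Composition is conserved; only the arrangement self-organises.
print(similarity(initial), similarity(grid))
```

No agent intends segregation, and no central authority imposes it; the macro-pattern is an emergent effect of local preferences, which is precisely what makes such models attractive for simulating social structure from the bottom up.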

However, the specific biopolitical mechanisms informed by the dynamics of life have changed from establishing self-regulating circulations to stimulating self-organisation, and from an orientation towards life's production of norms, and thus normalisation, to an increased attention towards the possible impact of local events understood as "fluctuations that threaten [systemic] stability" (Prigogine/Stengers, 1984: 188). These mechanisms are technical solutions to ethicopolitical problems which gain authority precisely because the problems they are deemed to solve have been naturalised beforehand. In the context of sociobiology and its recent iterations such as evolutionary psychology, Kember argues that the act of naturalising formerly denaturalised, contingent social relations such as gender is profoundly political because it depoliticises through its justification of the status quo, and thus "absolves us of the responsibility to act" (Kember, 2003: 34). Actualising a similar logic, FuturICT narrows the space of politics down to a matter of applying a technological rationality prefigured in the scientific project of attaining mastery over nature, which neglects the responsibility for reflecting on and transforming the historically stabilised social relations that bring forth the global problems FuturICT seeks to address.

Yet the reformulation of life within complexity theories could also provide an avenue for different forms of knowledge-making. In her discussion of the continuing relevance of Canguilhem's engagement with vitalism, Monica Greco suggests that the specificity of life as an object of knowledge resists totalising epistemologies because "the science of life is, itself, a manifestation of the activity of the living, a manifestation of its own subject matter" (Greco, 2005: 18). Following Canguilhem, the individual can gain knowledge of its environment – its milieu – only insofar as it is always already situated within this environment as a specific living being, experiencing it as its conditions of existence (Greco, 2005: 19). The situatedness of the knowing subject in the world is thus introduced into the relation between subject and object of knowledge. Rather than assuming an ontologically stable, knowing individual separate from its object of knowledge, the insistence on the coupling of organism and milieu, and on the unstable boundary between the two, implies a partial and relational perspective more akin to Haraway's situated knowledges (Greco, 2005: 20-1).

Interestingly, Greco delineates a certain continuity between Canguilhem's and Haraway's epistemologies and Isabelle Stengers' philosophical interpretation of the sciences of complexity. Given the reformulation of the living as a complex system, Stengers (1997) grounds her plea for a different ethos of thought in the temporality of nonlinear systems. If nonlinearity signifies a high sensitivity to slight differences in initial conditions, then the position of the ideal, god-like observer who might undertake the perfect initial measurements presupposed by classical physics in order to predict the behaviour of a system is simply not attainable, precisely because of the temporal and spatial situatedness of both system and observer (Stengers, 1997: 39-40). Contrary to the technoscientific and biopolitical rationality enacted in FuturICT, then, understanding life in terms of complexity could well yield a different practice of knowledge-making wherein life is not instrumentalised for stabilising power relations: a practice that assumes a partial perspective wherein omniscience is futile, as is the fantasy of prediction and control, for in the study of nonlinearity, and thus life, there can only be situated subjects.
 
 
 
 
 




Bibliography
Agamben, G. (1998) Homo sacer: sovereign power and bare life. Stanford: Stanford University Press.

Allen, P. (2011) 'Comments by P. Allen on the Visioneer white papers by D. Helbing and S. Balietti' European Physical Journal Special Topics 195, 165-186.

Ashley, R. (1983) 'The eye of power: the politics of world modeling' International Organization 37(3), 495-535.

Barry, A. (2006) 'Technological zones' European Journal of Social Theory 9(2), 239-253.

Bergson, H. (1998) Creative evolution. Mineola: Dover.

Bishop, S. et al. (2011) 'The European Future Technologies Conference and Exhibition 2011. FuturICT: FET Flagship Pilot Project' Procedia Computer Science 7, 34-38.

Bollier, D. (2010) The promise and peril of big data [Online]. Available at: http://www.aspeninstitute.org/publications/promise-peril-big-data (Accessed: 27 August 2012).

Canguilhem, G. (2008) Knowledge of life. New York: Fordham University Press.

Canguilhem, G. (1994) A vital rationalist. New York: Zone Books.

Creed, B. (1993) The monstrous-feminine: film, feminism, psychoanalysis. London and New York: Routledge.

Deleuze, G. (1992) 'Postscript on the societies of control' October 59, 3-7.

Doyle, R. (1997) On beyond living: rhetorical transformations of the life sciences. Stanford: Stanford University Press.

Economist (2010) 'The data deluge', 25 February [Online]. Available at: http://www.economist.com/node/15579717 (Accessed: 27 August 2012).

Epstein, J. & Axtell, R. (1996) Growing artificial societies: social science from the bottom up. Cambridge, Mass.: MIT Press.

European Commission (2011) Building FET flagships: a world-class scientific endeavour. Brussels: Directorate General Information Society and Media.

Foucault, M. (2008) The history of sexuality, Vol. 1: the will to knowledge. Camberwell: Penguin.

Foucault, M. (2008a) The birth of biopolitics: lectures at the Collège de France, 1978-1979. New York: Palgrave Macmillan.

Foucault, M. (2007) Security, territory, population: lectures at the Collège de France, 1977-78. Basingstoke: Palgrave Macmillan.

Foucault, M. (2007a) 'What is critique?', in Lotringer, S. (ed.) The politics of truth. Los Angeles: Semiotext(e), 41-81.

Foucault, M. (2004) Society must be defended: lectures at the Collège de France, 1975-76. London: Penguin.

Foucault, M. (2002) The order of things. New York: Routledge.

Foucault, M. (1991) Discipline and punish: the birth of the prison. London: Penguin.

Franklin, S. et al. (2000) Global nature, global culture. London: Sage.

Franklin, S. (2000) 'Life itself', in Franklin, S. et al. (eds.) Global nature, global culture. London: Sage, 188-224.

FuturICT (2012) Global computing for our complex world [Online]. Available at: http://www.futurict.eu/sites/default/files/docs/files/FuturICT_32p_Project%20Outline%20WITH%20LHS.pdf (Accessed: 27 August 2012).

FuturICT (2012a) Response in the media [Online]. Available at: http://www.futurict.eu/response-in-the-media (Accessed: 27 August 2012).

FuturICT (2012b) FuturICT documentary [Online]. Available at: http://vimeo.com/29480781 (Accessed: 27 August 2012).

Greco, M. (2005) 'On the vitality of vitalism' Theory, Culture & Society 22(1), 15-27.

Hacking, I. (1982) 'Biopower and the avalanche of printed numbers' Humanities in Society 5(3&4), 279-295.

Haraway, D. (2004) 'There are always more things going on than you thought! Methodologies as thinking technologies', in Haraway, D. (ed.) The Haraway reader. New York and London: Routledge, 332-342.

Haraway, D. (2000) How like a leaf: an interview with Thyrza Nichols Goodeve. New York and London: Routledge.

Haraway, D. (1997) Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. New York and London: Routledge.

Haraway, D. (1990) 'A manifesto for cyborgs: science, technology, and socialist feminism in the 1980s', in Nicholson, L. (ed.) Feminism/postmodernism. London and New York: Routledge, 191-233.

Haraway, D. (1988) 'Situated knowledges: the science question in feminism and the privilege of partial perspective' Feminist Studies 14(3), 575-599.

Harding, S. (1986) The science question in feminism. Ithaca: Cornell University Press.

Hayles, K. (1999) How we became posthuman: virtual bodies in cybernetics, literature, and informatics. Chicago and London: The University of Chicago Press.

Hayles, K. (1990) Chaos bound: orderly disorder in contemporary literature and science. Ithaca: Cornell University Press.

Helbing, D. (2012) 'A new kind of socio-inspired technology' Edge, 19 June [Online]. Available at: http://edge.org/conversation/a-new-kind-of-social-inspired-technology (Accessed: 27 August 2012).

Helbing, D. (2009) 'Systemic risks in society and economics' [Online]. Available at: http://www.santafe.edu/research/working-papers/abstract/9596e5a57d1f9b7e8fcc289f118555ce/ (Accessed: 27 August 2012).

Helbing, D. & Balietti, S. (2011) 'From social data mining to forecasting economic crises' European Physical Journal Special Topics 195, 3-68.

Helbing, D. & Balietti, S. (2011a) 'From social simulation to integrative system design' European Physical Journal Special Topics 195, 69-100.

Helbing, D. et al. (2011) 'Understanding, creating, and managing complex techno-socio-economic systems: challenges and perspectives' European Physical Journal Special Topics 195, 165-186.

Helbing, D. et al. (2009) 'Biologistics and the struggle for efficiency: concepts and perspectives' Advances in Complex Systems 12(6), 533-548.

Humphreys, P. (2002) 'Mathematical modeling in the social sciences', in Turner, S. & Roth, A. (eds.) The Blackwell guide to the philosophy of the social sciences. Malden, Oxford and Berlin: Blackwell, 166-184.

Kauffman, S. (1995) At home in the universe: the search for the laws of self-organization and complexity. New York and Oxford: Oxford University Press.

Kay, L. (2000) Who wrote the book of life? A history of the genetic code. Stanford: Stanford University Press.

Keller, E. (2005) 'Revisiting "scale-free" networks' BioEssays 27, 1060-1068.

Kember, S. (2006) 'Doing technoscience as ('new') media', in Curran, J. & Morley, D. (eds.) Media and cultural theory. London and New York: Routledge.

Kember, S. (2003) Cyberfeminism and artificial life. London and New York: Routledge.

Lazer, D. et al. (2009) 'Life in the network: the coming age of computational social science' To be published in Science. NIHPA [Preprint]. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2745217/ (Accessed: 27 August 2012).

Lemke, T. (2011) Biopolitics: an advanced introduction. New York and London: New York University Press.

Lyotard, J. (1984) The postmodern condition: a report on knowledge. Minneapolis: University of Minnesota Press.

Manyika, J. et al. (2011) 'Big data: the next frontier for innovation, competition, and productivity' [Online]. Available at: http://www.mckinsey.com/insights/mgi/research/technology_and_innovation/big_data_the_next_frontier_for_innovation (Accessed: 27 August 2012).

Mayr, O. (1986) Authority, liberty & automatic machinery in early modern Europe. Baltimore and London: The Johns Hopkins University Press.

Mislove, A. et al. (2010) 'Pulse of the Nation: U.S. mood throughout the day inferred from Twitter' [Online]. Available at: http://www.ccs.neu.edu/home/amislove/twittermood/ (Accessed: 27 August 2012).

Mitchell, M. (2009) Complexity: a guided tour. New York: Oxford University Press.

Mitchell, M. (2006) 'Complex systems: network thinking' Artificial Intelligence 170, 1194-1212.

Muhle, M. (2007) Eine Genealogie der Bio-Politik. Eine Untersuchung des Lebensbegriffs bei Michel Foucault und Georges Canguilhem. PhD thesis. Europa-Universität Viadrina Frankfurt (Oder) [Online]. Available at: http://1.static.e-corpus.org/download/notice_file/849589/MuhleThese.pdf (Accessed: 27 August 2012).

Myers, N. (2009) 'Performing the protein fold', in Turkle, S. (ed.) Simulation and its discontents. Cambridge, Mass.: MIT Press.

Negri, A. & Hardt, M. (2000) Empire. Cambridge, Mass. and London: Harvard University Press.

Prigogine, I. & Stengers, I. (1984) Order out of chaos: man's new dialogue with nature. London: Heinemann.

Rabinow, P. (1992) 'Artificiality and enlightenment: from sociobiology to biosociality', in Crary, J. & Kwinter, S. (eds.) Incorporations. New York: Zone, 234-252.

Rose, N. (2007) Politics of life itself: biomedicine, power, and subjectivity in the twenty-first century. Princeton: Princeton University Press.

Savat, D. (2009) 'Deleuze's objectile: from discipline to modulation', in Poster, M. & Savat, D. (eds.) Deleuze and new technology. Edinburgh: Edinburgh University Press.

Shapin, S. & Schaffer, S. (1985) Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton: Princeton University Press.

Stengers, I. (1997) Power and invention. Minneapolis: University of Minnesota Press.

Vogl, J. (2004) 'Regierung und Regelkreis. Historisches Vorspiel', in Pias, C. (ed.) Cybernetics - Kybernetik. The Macy Conferences 1946-1953. Band II: Essays und Dokumente. Zürich and Berlin: Diaphanes, 67-80.

Wiener, N. (1989) The human use of human beings: cybernetics and society. London: Free Association.