
Actor Network Theory

ANT rejects dualism and conceptualizes an actor-network as a heterogeneous network consisting of both humans and nonhumans, thus allowing consideration of the important technological elements that underlie and influence economic activities.
From: Global Value Chains and Production Networks, 2019

Related terms:

Contingency Theory, Business Network, Innovation Management, Business Model, Resource-Based View, Customer Integration, Consumer Attitude

Social Constructivism
W. Detel, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 The Actor-network Theory


The actor-network theory (Latour and Woolgar 1986, Latour 1987) is a form of
constructivism that rejects the idea of a social determination of scientific
knowledge, prominent in the Edinburgh school, mainly for the reason that the
social is barely better understood than the natural. The leading thought is that
scientific knowledge is an effect of established relations between objects, animals,
and humans engaged in scientific practices. An actor is, according to this theory,
everything that in some causal way affects the production of scientific statements
and theories: not only scientists, but also, for instance, background assumptions,
methodologies, techniques, social rules and institutions, routines, experiments,
measurements and the appropriate instruments, scientific texts and, last but not
least, external objects. For an entity to be an actor in this sense it is obviously not
required to have contentful mental states, but only to be able to perform actions, a
kind of behavior describable under some intention. Thus, there can be many sorts
of relations and interactions between actors; in particular, some actors can
transform other actors (these transformations are sometimes called translations). A
network is a set of actors such that there are relations and translations between the
actors that are stable, in this way determining the place and functions of the actors
within the network. Once a network has been established, it implies a sort of
closure that prevents other actors or relations from entering the network, thereby
opening the possibility of the accumulation of scientific knowledge that is taken to
be the result of translations within the network. Establishing a scientific belief,
theory, or fact comes down, from the point of view of the actor-network theory, to
placing these actors in a stable network. In this sense, scientific beliefs, knowledge,
theories, and facts are taken to be constructed by translations taking place in
established networks.
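
Detel's vocabulary of actors, translations, stability, and closure can almost be read as a data structure. The sketch below is purely illustrative and not drawn from any of the texts excerpted here: all names (Actor, Network, translate, admits) are invented for the example, and no claim is made that ANT is, or should be, formalized this way; it merely restates the paragraph above in schematic form.

```python
# A purely illustrative sketch of the vocabulary used above; all names are
# invented and no formalization of ANT is implied.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Actor:
    """Anything that causally affects the production of scientific statements:
    a scientist, an instrument, a routine, a text, an external object."""
    name: str
    kind: str  # e.g. "human", "instrument", "text", "object"

@dataclass
class Network:
    """A set of actors together with the stable relations (translations) among them."""
    actors: set = field(default_factory=set)
    translations: set = field(default_factory=set)  # directed (source, target) pairs

    def translate(self, source: Actor, target: Actor) -> None:
        """Record that `source` transforms ('translates') `target`;
        both thereby become members of the network."""
        self.actors.update({source, target})
        self.translations.add((source, target))

    def admits(self, candidate: Actor) -> bool:
        """Closure: an established network only admits entities already
        caught up in one of its translations."""
        return any(candidate in pair for pair in self.translations)

# Usage: a researcher enrolls an instrument, whose readings are inscribed as text.
net = Network()
researcher = Actor("researcher", "human")
microscope = Actor("microscope", "instrument")
article = Actor("journal article", "text")
net.translate(researcher, microscope)
net.translate(microscope, article)
print(net.admits(Actor("dissenting claim", "text")))  # False: the network is closed
```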
The actor-network theory shares a number of basic assumptions with social
constructivism as conceived in the Edinburgh school. Thus, both approaches
entertain a naturalistic account of scientific practices, do not presuppose a
distinction between true or successful and false or unsuccessful scientific beliefs,
and reject the possibility of a rational reconstruction of scientific practices and their
outcomes. But constructivism in the sense of the actor-network theory is social not
in the strong sense that social forces that are presupposed to exist largely
independently of scientific practices have a causal impact on these practices, but
rather in the extremely weak sense that as a result of processes taking place in
networks, a scientific claim can eventually be developed about a distinction
between the natural and the social, and consequently also about the function of the
social for scientific practices (Pickering 1992).
Social constructivism usually does not hold that, in the course of scientific
practices, scientific facts of the external world are literally constructed of some
other entities. The crucial idea of a (social) construction of scientific knowledge and
scientific facts is, rather, that an analysis of the process and history of scientific
belief formation will not be able to show that the methods of science continuously
increase the probability that scientific beliefs will be good representations of an
independent external world, and should not even try. Instead, scientific belief
formation should be modeled in terms of very different factors, mainly social ones
like rules, techniques, institutions, power relations, and negotiations, which affect
scientific belief formation in a causal way that can be studied empirically and
sociologically.
One of the most debated applications of social constructivism is the claim held by
many feminists that gender is socially constructed (see Comparable Worth in Gender
Studies, Feminist Political Theory and Political Science). Originally, the most
prominent feminist theories introduced a distinction between a naturalistic notion
of sex and a social notion of gender to denounce all attempts to derive properties
of gender from properties of sex as mere constructions of gender that do not refer
to reality. More recently, in some postmodern feminist accounts of sex and gender,
it is also the very distinction between sex and gender, and thus sex itself, that are
supposed to be socially constructed.

URL: https://www.sciencedirect.com/science/article/pii/B008043076701086X

Actor Network Theory


M. Callon, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Making up Hybrid Collectives


For ANT, society must be composed, made up, constituted, established,
maintained, and assembled. There is nothing new about this assertion, as such; it
is shared by many constructionist currents. But ANT differs from these approaches
in the role it assigns to nonhumans in the composition of society. In the traditional
view, nonhumans are obviously present, but their presence resembles that of
furniture in a bourgeois home. At best, when these nonhumans take the form of
technical artifacts, they are necessary for the daily life they facilitate; at worst, when
they are present in the form of statements referring to entities such as genes,
quarks, or black holes, they constitute elements of context, a frame for action. To
the extent that they are treated as lying outside the social collective or as
instrumentalized by it, nonhumans are in a subordinate position. Similarly, when
the topic of analysis is institutions, organizations, or rules and procedures, social
analysts assume that these meta-individual realities are human creations, like the
technical artifacts that supplement daily life. The social sciences are founded on
this great divide between humans and nonhumans, this ontological asymmetry
that draws a line between the social and the nonsocial.
However, the past two decades of science and technology studies have caused this
division to be called into question. Moreover, as we have seen, in the laboratory
nonhumans act and, because they can act, they can be made to write and the
researcher can become their spokesperson. Similarly, technical artifacts can be
analyzed as devices that at some point capitalize on a multitude of actants, always
temporarily. Society is constructed out of the activities of humans and nonhumans
who remain equally active and have been translated, associated, and linked to one
another in configurations that remain temporary and evolving. Thus, the notion of
a society made of humans is replaced by that of a collective made of humans and
nonhumans (Latour 1993). This reversal has numerous consequences. We shall
stick to a single example, that of the distinction between macro and micro levels,
which has been replaced by framed and connected localities.
Does a micro level exist? The answer seems obvious. When our motorist takes to
task another motorist who refused him right of way, or when he receives a traffic
fine, he enters into interactions with other perfectly identifiable individual actors.
Generally speaking, nothing other than interactions between individuals has ever
been observed. Yet it seems difficult to simply bracket off realities like institutions
or organizations that obviously shape and constrain the behavior of individual
agents, even when they are considered as the unintentional outcome of the
aggregation of numerous individual actions. To avoid this objection (and the usual
solutions that describe action as simultaneously structuring and structured), ANT
introduces the notion of locality, defined as both framed and connected.
Interactions, like those between motorists arguing with each other after an
accident, or between them and the traffic policeman who arrives on the scene, take
place in a frame that holds them. In other words, there are no interactions without
framing to contain them. The mode of framing studied by ANT extends that
analyzed by Goffman, by emphasizing the active part played by nonhumans who
prevent untimely overflowing. The motorists and traffic officers are assisted, in
developing their argument about how the accident occurred, by the nonhumans
surrounding them. Without the presence of the intersection, the traffic lights that
were not respected, the traffic rules that prohibit certain behaviors, the solid lines
that ‘materialize’ lanes, and without the vehicles themselves that prescribe and
authorize certain activities, the interaction would be impossible, for the actors
could give no meaning to the event and, above all, could not properly circumscribe
and qualify the incident itself.
This framing which constrains interactions by avoiding overflowing is also
simultaneously a connecting device. It defines a place (that of the interaction) and
at the same time connects it to other places (where similar or dissimilar accidents
have taken place, where the policemen go to write up reports, or where these
reports end up, etc.). All the elements that participate in the interaction and frame
it establish such connections for themselves. The motorist could, for example,
invoke a manufacturing defect, the negligence of a maintenance mechanic, a
problem with the traffic signals, the bad state of the road, the traffic officer's lack of
training, etc. Suddenly the circle of actants concerned has become substantially
bigger. Through the activities of the traffic officer, the automobile, and the
infrastructure which all together frame interactions and their implications, other
localities are associated with those of the accident: the multiple sites in which
automobile manufacturers, the networks of garage owners, road maintenance
services, and police training schools act. Instead of microstructures, there are now
locally framed interactions; instead of macrostructures, there are connected
localities, because framing is also connecting.
With this approach it is possible to avoid the burdensome hypothesis of different
levels, while explaining the creation of asymmetries, i.e., of power relations,
between localities. The more a place is connected to other places through science
and technology, the greater its capacity for mobilization. The translation centers
where inscriptions and statements converge provide access to a large number of
distant and heterogeneous entities. The technical artifacts present in these far-off
places ensure the distant delegation of the action decided in the translation center.
On the basis of the reports and results of experiments it receives, a government
can, for example, decide to limit CO2 emissions from cars to a certain level. As a
translation center it is in a position to establish this connection between the
functioning of engines and the state of pollution or global warming. It sees entities
and relations that no one else can see or assemble. But the application of this
decision implies, among other things, the setting up of pollution-control centers
and the mobilization of traffic officers to check that the tests have been performed,
and if necessary to fine motorists, on the basis of a law passed by parliament. Thus,
the action decided by the translation center mobilizes a large number of human
and nonhuman entities who actively participate in this collective and distributed
action. Just as the motorist sets in motion a whole sociotechnical network by
turning the ignition key, so the minister for the environment sets in motion an
elaborately constructed and adjusted network by deciding to fight pollution. The
fact that a single place can have access to other places and act on them, that it can
be a translation center and a center for distant action—in short, that it is able to
sum up entire sociotechnical networks—explains the asymmetry that the
distinction between different levels was supposed to account for.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767031685

Foreword
Associate Professor Michele Willson, in Information Cosmopolitics, 2015

The book employs ANT to sketch an alternative projection for the study of
nationalism and cosmopolitanism. The author claims that the positioning of
theoretical approaches to nationalism and cosmopolitanism research within a
continuum between particularist and universalist understandings generates a
gap between theory and practice, as it forces researchers to adopt an a priori position
before any empirical study is undertaken. He argues that these approaches
understand a nation or a cosmos as a union of its members, whereas ANT more
usefully conceptualises it as an intersection. The ‘union’ projection presents any
social group as ‘something that holds us together’; the ANT projection sees it as
‘something that is held together’. Consequently, in the union projection, nation or
cosmos is seen as a stable entity despite frequent replacements of its parts as the
unity and durability are provided by the projection (union) itself. In contrast, the
proposed alternative projection illuminates the hard work that numerous and
heterogeneous actors perform to maintain unity and identity. It allows us to see why
a nation or a cosmos has to be constantly reinvented in order to maintain its
identity as the imbrication of events, actions and individuals (or more accurately, to
use ANT terminology, actants) forces the intersection to change its shape and size.
The author argues that we should focus on these processes of reinvention as they
illuminate the means of construction and reproduction of nation and cosmos (and
in the process, reveal fragile connections that provide an empirical traceability
between individual actions and the construction sites of nationalism and
cosmopolitanism).

URL: https://www.sciencedirect.com/science/article/pii/B9780081001219000139

Overview of the Research into GPNs


Cui Fengru, Liu Guitang, in Global Value Chains and Production Networks, 2019

1.2.3.3 Limitations of Networks, Embeddedness, and Actor-Network Theory
While ANT offers an interesting methodology that has already been adopted for
the study of globalization and production networks, its contribution to the analysis
of economic development is constrained by the fact that it lacks an appreciation of
the structural preconditions and power relations that inevitably shape production
networks (Dicken et al., 2001). Actors are theorized in the GPN framework devised
by Dicken and coworkers not as individual agents per se, but as a constitutive part
of the wider network through which emergent power and effects are realized over
space (Hess and Yeung, 2006).

URL: https://www.sciencedirect.com/science/article/pii/B9780128148471000014

Pedagogies and teaching methods


Susan Myburgh, Anna Maria Tammaro, in Exploring Education for Digital
Librarians, 2013

Social constructivism
In the fields of sociology of science and Science and Technology Studies (STS),
social constructivism has been widely used, supporting the ideas of Social
Construction of Technology (SCOT) and Actor-Network Theory (ANT). As a learning
theory, constructivism is based on the idea that cognitive (or mental) activity
constructs knowledge by making meaning, mediated by language (this is also clear
in Vygotsky’s work). This interaction between experience and ideas creates
knowledge through the discovery, exploration and confrontation of problems.
Constructivism means that human beings do not find or discover knowledge so
much as construct or make it. We invent concepts, models and schemes to make
sense of experience and further, we continually test and modify these constructions
in the light of new experience. (Schwandt, 1994, pp. 125–6)
From the constructivist position, knowledge is constructed by humans, validated by
use in society, and so maintained by social institutions. There are weak and strong
versions of constructivism: in the weak version, human representations of reality or
concepts are social constructs: if representations or conceptions of an entity or
phenomenon are socially constructed, they can thereafter act upon the entities. In
strong social constructivism, not only are the representations of concepts socially
constructed, but the entities themselves are as well. As Latour and Woolgar (1986)
discuss in Laboratory Life, chemical substances, for example, are only recognised as
such because of the social knowledge system which conceives them to be so. The
work of Latour in particular suggests that knowledge is, in fact, generated by a
social process of consensus-building within communities, much as Kuhn
(1962/1970) argued.
Constructivism recognises discourses and sign-systems operating not only upon
the objects of a given knowledge structure – such as a discipline or profession –
but also upon its human subjects: its professionals. So, this learning theory defies
the hegemony of grand narratives, and questions the authority of the natural
sciences as the only way in which to create knowledge. Instead, the traditional
macro-structures of disciplines break up in the face of contingent and socially
negotiated knowledge creation. Smaller groups of collaborative individuals create
microstructures of meaning. Because all knowledge is socially and culturally
constructed, what an individual learns depends on what the learning leader (or teacher)
provides. Interaction with ‘experts’ (those who ‘possess’ knowledge) remains
essential, but the nature of the interaction differs significantly. Because of the wide
array of digital resources (sometimes considered to be surrogate ‘experts’), and the
use of social media, the generation of ideas and knowledge is not controlled or
stable: it is constantly open to modification and interpretation (Breu and
Hemingway, 2002), and becomes ‘the wisdom of crowds’ (Surowiecki, 2004).
‘Crowd-sourcing’ has become the new method of information retrieval as collective
intelligence is understood to be superior to that of the individual. Participating in
the identification, creation and sharing of ideas – and experiencing these processes
– becomes more important than ‘consuming’ or absorbing them.
Teaching in the constructivist mode is collaborative, and ICTs facilitate and
encourage this, so that collaboration can extend beyond the individual and his/her
interaction with information resources and ideas, to others in the learning
community. Constructivism shapes teaching and learning as:
■ a constant activity;
■ a search for meaning;
■ understanding the whole as well as parts;
■ understanding mental models of students and other knowledge creators
(suggesting customised curricula);
■ assessment as part of the learning process;
■ learning undertaken collaboratively and through conversations.
One of the responsibilities of the teacher is to recognise the individuality of each
student, so that what Vygotsky (1978) describes as the ‘zone of proximal
development’ can be achieved. This is the area in which the student is challenged
but not overwhelmed and can remain unthreatened and yet learn something new
from the experience. Vygotsky also articulated the notion of ‘scaffolding’, meaning
that teaching must start with what the student already knows and build a
framework that will support further knowledge; typically this involves
proceeding from the concrete to the abstract. This metaphor ties in nicely with
constructivism.

URL: https://www.sciencedirect.com/science/article/pii/B9781843346593500104

Science, Sociology of
T.F. Gieryn, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3.4 Actor-networks and Social Worlds


After sociologists worked to show how science is a thoroughly social thing, Bruno
Latour (1988) and Michel Callon (1986) then retrieve and reinsert the material:
science is not only about facts, theories, interests, rhetoric, and power but also
about nature and machines. Scientists accomplish facts and theories by building
‘heterogeneous networks’ consisting of experimental devices, research materials,
images and descriptive statistics, abstract concepts and theories, the findings of
other scientists, persuasive texts—and, importantly, none of these are reducible to
any one of them, nor to social interests. Things, machines, humans, and interests
are, in the practices of scientists, unendingly interdefined in and through these
networks. They take on meanings via their linkages to other ‘actants’ (a semiotic
term for anything that has ‘force’ or consequence, regardless of substance or form).
In reporting their results, scientists buttress claims by connecting them to as many
different actants as they can, in hopes of defending the putative fact or theory
against the assault of real or potential dissenters. From this perspective, length
makes strength, that is, the more allies enrolled and aligned into a network—
especially if that network is then stabilized or ‘black boxed’—the less likely it is that
dissenters will succeed in disentangling the actants and thereby weaken or kill the
claim. Importantly for this sociology of science, the human and the social are
decentered, in an ontology that also ascribes agency to objects of nature or
experimental apparatuses.
Actor-network theory moved the sociological study of science back outside the
laboratory and professional journal—or, rather, reframed the very idea of inside
and outside. Scientists and their allies ‘change the world’ in the course of making
secure their claims about nature, and in the same manner. In Latourian vernacular,
not only are other scientists, bits of nature or empirical data enlisted and
regimented, but also political bodies, protest movements, the media, laws and hoi
polloi. When Louis Pasteur transformed French society by linking together
microbes, anthrax, microscopes, laboratories, sick livestock, angry farmers, nervous
Parisian milk-drinkers, public health officials, lawmakers, and journalists into what
becomes a ‘momentous discovery,’ the boundary between science and the rest of
society is impossible to locate. Scientists are able to work autonomously at their
benches precisely because so many others outside the lab are also ‘doing science,’
providing the life support (money, epistemic acquiescence) on which science
depends.
The boundaries of science also emerge as theoretically interesting in related
studies derived from the brand of symbolic interactionism developed by Everett
Hughes, Herbert Blumer, Anselm Strauss, and Howard Becker (and extended into
research on science by Adele Clarke 1990, Joan Fujimura, and Susan Leigh Star).
On this score, science is work—and, instructively, not unlike work of any other
kind. Scientists (like plumbers) pursue doable problems, where ‘doability’ involves
the articulation of tasks across various levels of work organization: the experiment
(disciplining research subjects), the laboratory (dividing labor among lab
technicians, grad students, and postdocs), and ‘social worlds’ (the wider discipline,
funding agencies, or maybe animal-rights activists). Scientific problems become
increasingly doable if ‘boundary objects’ allow for cooperative intersections of those
working on discrete projects in different social worlds. For example, success in
building California's Museum of Vertebrate Zoology in the early twentieth century
depended upon the standardization of collection and preparation practices (here,
the specimens themselves become boundary objects) that enabled biologists to
align their work with trappers, farmers, and amateur naturalists in different social
worlds. As in actor-network theory, sociologists working in the ‘social worlds’
tradition make no assumption about where science leaves off and the rest of
society begins—those boundaries get settled only provisionally, and remain open
to challenge from those inside and out.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767031533

A proposed philosophico-ethical approach towards the electronic information era
Carel Stephanus de Beer, in Information Science as an Interscience, 2015

9.4.3 Acritical philosophy: a philosophy of invention


Invention is only possible within this context of links, networks and interactions. All
that can be achieved otherwise is innovation, which is not the same. The one is
closed, the other open.
But what is a ‘philosophy of information’? Is there room for, or a place for, such an
endeavour, similar to the generally accepted ideas of a philosophy of history, of
religion, of law, of language, of politics, and others? And who must work it out and
is it an activity which should be taken seriously and why?
These perspectives can fruitfully be developed in terms of Serres’ thinking (and
according to his approach to reading as a journey).
Why this focus on Michel Serres? There are various reasons: the influential
character of his works and thinking, widely translated and discussed, and the
inspiration behind the actor-network theory of a number of French and British
sociologists; the uniqueness of his views on information, especially his unique
paradigmatic claims (an acritical philosophy of information); the key position in
which information is put in the field of knowledge and information work; the
special interdisciplinary position taken by him in his thinking about information,
education, the sciences and literature. All these insights have a very specific bearing
on our theme and on our subject field.
The way in which he developed his views on communication shows a very heavy
focus on information. The central place allocated to information in this and the
subsequent titles of the Hermès series and other publications justifies in the very
first place the notion or idea of a ‘philosophy of information’ rather than that of
communication, albeit firmly the case that the intimate relationship between these
two terms should be carefully explored. What should also be noted is that we
encounter in this thinker a very unique and special, and for me exciting and useful,
interpretation of these terms, the fecundity and fruitfulness of which we should not
lose sight. As a matter of fact this fecundity is precisely the issue that forms the
core and focus of this study. (It is only when we move beyond the critique of the
‘critique’ tradition that invention becomes a possibility.)

URL: https://www.sciencedirect.com/science/article/pii/B9780081001400000092

Trends in Workplace Learning Research


T. Fenwick, in International Encyclopedia of Education (Third Edition), 2010
Co-Participation or Co-Emergence Model
In this orientation, individual and social processes are viewed as unique but
enmeshed, and deserve examination at micro and macro levels of analysis.
Learning is knowledge creation through social participation in everyday work. The
conception is of mutual interaction and modification between individual actors,
their histories, motivations and perspectives, and the collective (including social
structures, cultural norms and histories, and other actors). Radical versions expand
the collective to include environmental architecture, discourses, and objects, as in
actor–network theory where knowledge circulates and is translated in each
interaction of one agent mobilizing another. Cultural–historical activity theory
views the individual and organization in a dialectical relationship, where learning is
occasioned by questioning practices or contradictions of the system, and is
distributed among system elements: perspectives, activities, artifacts, affected by all
contributors and clients. Complexity theory treats learning as inventive/adaptive
activity produced continuously through action and relations of complex systems,
occasioned in particular through disturbance. Most agree that learning is
prompted by particular individuals (guides or mentors), events (conflict or
disturbance), leaders (e.g., encouraging inquiry, supporting improvisation), or
conditions (learning architecture).
Long popular in Nordic research on workplace learning but only recently emergent
in North American research is cultural–historical activity theory (CHAT). Here,
learning is viewed as change in a community’s joint action. The community’s
activity is shaped by its rules and cultural norms, division of labor and power, and
mediating artifacts (language, tools, and technologies) that it uses to pursue the
object – a problem at which activity is directed. Learning occurs as the collective
construction and resolution of tensions or contradictions occurring within this
activity system. Unlike other practice-based systemic perspectives of workplace
learning, CHAT retains its Marxist influences in its recognition of the inherent
contradictions in capitalist work systems based on labor exchange, and in its
analysis of the historical emergence of particular practices and ideologies (see
Chaiklin et al., 2003).
In organizational studies and increasingly in educational study in Canada,
complexity theory is also gaining acceptance as a useful way to understand how
activity, knowledge, and communities emerge together in the process of workplace
learning. Individual interactions and meanings form part of the workplace context
itself: they are interconnected systems nested within the larger systems in which
they act. As workers are influenced by symbols and actions in which they
participate, they adapt and learn. As they do so, their behaviors, and thus their
effects upon the systems connected with them, change. The focus is not on the
components of experience (which other perspectives might describe in fragmented
terms: person, experience, tools, and activity) but on the relationships binding
them together. Workplace learning is thus cast as continuous invention and
exploration in complex systems.
Critics suggest that such practice-based studies of workplace learning bypass
questions of politics and power relations: who is excluded from the construction of
knowledge in a CoP, what dysfunctional or exploitative practices are perpetuated in
communities of practice, and what hierarchical relations in the workplace
reproduce processes of privilege and prejudice. Issues raised include accreditation
and assessment of learning when it’s buried in co-participation, distinguishing
desirable from undesirable knowledge development, accounting for changing
notions of what is useful knowledge, and differentiating influences of particular
groups in the co-participational flux (positional, generational, gendered, etc.). At
issue is the extent to which sociocultural learning theories including notions of
communities of practice, complex adaptive systems, or even CHAT suppress or
enable core questions about the politics and purposes of workplace learning.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947000129
Mapping virtual worlds
Woody Evans, in Information Dynamics in Virtual Worlds, 2011

Transhumanism
What it is: Transhumanism has strong descent lines from Queer Theory, ANT,
libertarianism, and humanism. The main thrust of its concerns is that humans and
the technology humans have built are headed toward a convergence of some kind.
Many transhumanists hold that free people should have the right to use technology
to alter their bodies, minds, and environments in whatever way they see fit (so
long, often goes the caveat, as it does not impinge on any other person’s rights).
Transhumanism, then, may be seen as much a political position as it is a
philosophical one.
Why it matters here: Virtual worlds offer ways to model transhuman and posthuman
projects, including social systems that may come after The Singularity. New
convergence technologies, such as refined voice recognition software, gestural
interfaces, and brain-wave control devices, have been developed specifically as tools
for streamlining our psychosomatic experiences inworld. The mutability suggested
by Queer Theory and Actor-Network Theory is tied together and magnified by
Transhumanism. Virtual worlds provide fictional settings to test transhuman
scenarios; from personal and social reactions to grave body modification, to the
dehumanizing effects (or not) of machine integration into flesh, virtual worlds can
be sandboxes for proto-posthumans. The idea that virtual worlds may be the
breeding grounds for a libertarian-tilted techno-social revolution should be of
great interest to most cultural observers.
Nobuyoshi Terashima brings concerns about mediation and hyperreality squarely
into transhumanist turf when he ties together the blending of simulations with Real
experience and the conflation of artificial intelligences with human intelligences in
such blended environments (see Hyperreality, 2001). Terashima’s work bridges the
work of Baudrillard and transhumanism, because it lays out a space beyond mere
‘virtual reality’, which is more-or-less a simple simulation, and opens the way for a
genuinely alternative experience of reality which relies on human co-evolution with
technology. As John Tiffin, working with Terashima, puts it: ‘A HyperWorld is not
only where what is real and what is virtual interact, it is where human intelligence
meets artificial intelligence.’ (2001: 33). This is a clear overlap between media
theory and transhumanist concerns.
Transhumanism is pointedly focused on how media (and everything else) changes
on the way toward and beyond the techno-social singularity. The ‘singularity’ is the
point at which human intelligence is bound with super-human artificial
intelligences. Though the singularity is defined in various ways, this
is an overriding and common theme: transformation of human culture into
superhuman status by way of incorporating our most advanced tools into our own
(now purposeful) evolution.
It will be clear that all of these theories are important online, and indeed some have
been used to make sense of hypertext or the World Wide Web for some years now
(see the ties between hypertext and intertextuality in Landow’s work (1994, 2006),
for example, or read the general concern in ANT for social networks between
people and machines). These theoretical frameworks help us make sense of the
specific concerns of each virtual world we’ll explore; but these few broad theories
won’t be our only tools – when appropriate, other modes of understanding virtual
worlds will be brought in as well (I hear somebody out there holler: Why’s he not
mentioned Jurgen Habermas? I’ve hung me whole career on Habermas . . . Show a little
love, mate!), so don’t worry just yet. It’s not like the concerns of the Marxists or
Feminists, or about environment or ethnicity have been shut out. Globalization and
Situationism incorporate elements of political and economic critiques. Queer
Theory, to some extent, has metabolized and obviated Feminism. Ethnicity,
language, and citizenship are all concerns of globalization studies, and their
impacts online (or the impact that being online has on language, etc.) is, by now,
not new territory. And, again, the sketches above do not exhaust the theories that
could or should be used to understand virtual worlds: they simply act as the
mainmost tools for the book in hand.

URL: https://www.sciencedirect.com/science/article/pii/B9781843346418500011

Themed issue: Spatialities of Ageing


Neil M. Coe, ... Mark Whitehead, in Geoforum, 2012

3 The matter of air pollution (Samuel Randalls)


Objects upon objects, lists upon lists, instruments upon instruments: air pollution
monitoring, we learn, involves a large catalogue of things that come to represent,
measure, symbolise and enact air pollution. Whether this is the grimy half-cleaned
painting or the various types of deposit gauges, air pollution is something to be
directly observed, monitored and, where possible, managed. Constructing an
orderly system is, however, prone to acts of whimsy by governmental or scientific
authorities, indiscipline by observers or the simple banality of everyday events. In
the latter category come the glass-smashing cat and the London County Council
member driving away with the pollution monitoring equipment temporarily
suspended on the car roof before it crashes to an inevitable shattering landing.
Whitehead’s State, Science and the Skies invites readers to consider the individuality
and materiality invoked in air pollution science and policy. It is the scientific
instrument, the individual measurement, the tacit ability to identify different
shades of brown, and the smoke abatement exhibitions that become important
actors in this account. At the same time these actors are tied together into a story
of atmospheric governmentality in which atmospheres, sensitive bodies, states and
pollution observers learn the art of conducting themselves through these
governmental apparatuses. It is, as might be inferred from the preceding
comments, the entwining of governmentality and actor-network theory that is at
the core of the theoretical and methodological accomplishment of the book.
Whitehead draws the literatures together, moving beyond environmental
governmentality accounts that ascribe power to an over-arching managerialist
force in which actors remain passive, while at the same time retaining a notion of
subjects that is far more Foucauldian than actor-network theorists may be
comfortable with. Methodologically the alliance works well to produce an
enlightened micro- and macro-historical account of air pollution monitoring in
late 19th and 20th century Britain. Conceptually, the book raises some interesting
puzzles and it is to these that I now turn.
In reading State, Science and the Skies it was not only actor-network theory that
came to mind, but rather a different tradition in science and technology studies
scholarship, namely the literature about co-production (for example Jasanoff, 2004).
While co-productionist accounts primarily operate at an interpretative level and
have a rather more tenuous grasp on material agency than actor-network theorists,
the ideas of co-producing science and society, and knowledge and order, are
ever-present in Whitehead’s book. His analysis concurs with co-productionist accounts
in its care for political questions. To give one example, he shows how current air
pollution governance is focused on defining and securing ecological thresholds
rather than conceiving of societal or environmental protection in a more holistic
way. Indeed it is up to sensitive individuals to limit exposure, not the government
to prevent exposure. Who gets what, how and why matter. What is ordered in one
way can be re-ordered in a different way. It is, as co-productionists might point
out, the moral legitimacy of air pollution governance that matters alongside claims
for better or simply more science. Co-productionism seems a more comfortable
ally for governmentality theorists than the actor-network debates. Given this, one
must think that the material really matters in and for Whitehead’s account. It does,
but how?
The intriguing puzzle emerging from the book, then, is the question of what exactly
is being measured, monitored and managed. Whitehead’s account suggests that
air pollution resists being easily subsumed into governmental apparatuses. The
question is: what exactly is it that resists? Air pollution is not constant, as Whitehead
points out at the start of the book (Whitehead, 2009, p. 2). This implies that air
pollution is constantly being re-enacted. The forms, paintings, observers, and
gauges are not measuring one thing – air pollution – but rather a multiplicity of
things that coalesce at certain times and in certain places under this discursive
category. At times, the book seems to slip into a story of air pollution
governmentality that treats the thing ‘air pollution’ as rather more stable than the
empirical material might suggest. To adopt the language of Mol (2002), air
pollutions seem to be multiply constituted in practice. They are contingent on the
pollutant being realised within the instrument or body. In other words it only
becomes pollution when it has an effect.
This leads into an important question about the temporalities of air pollution. To
what extent do these temporalities inform government actions and the emergence
of atmosphere-sensitive subjects in particular ways that might be distinct from
each other and from broader debates, for example, about global climatic change
(Demeritt, 2006, highlights the political and technical co-constitution of climate
knowledge)? Smoke has a materially disruptive immediacy compared to invisible
gases sensed by suitably trained instruments and bodies, but disguised to the eye.
Like extreme weather events, air pollution events punctuate life. They may also be
of relatively short timeframe as emphasised by government warnings to avoid
strenuous activity when air pollution levels are high (implying that they will be able
to return soon). Do these material temporalities of air pollution events in
contemporary Britain enable forms of governance that charge individuals with
self-control in ways that differ from earlier smogs? Showing how these materialities
come to inform and shape governance is important to prove this argument.
One might draw parallels to broader debates about weather and climate; indeed
this comparison is pointed to at the end of the book with some careful caveats. The
readily available information on air pollution is in marked contrast to the
difficulties of accessing other forms of atmospheric data. Equally important are the
relations between these atmospheres and the atmospheric subjects. Whitehead
notes that air pollution as a policy topic lends passivity to government action, partly
at least as a result of its distinct temporalities that divert attention from national
and international regulations to the sensitive (different from ‘normal’) individual
affected. Likewise when extreme weather events occur, individuals are reminded to
listen to forecasts and be prepared. In climate change adaptation, on the other
hand, islanders noticing sea levels rising have prompted, and are likely to continue
prompting, governments to respond actively, demanding cuts to emissions or
reparations, because here there is no easy way to wait until the next day to engage
in strenuous activity. The type of atmospheric threat invoked in governance debates
thus has a significant influence on the policy mechanisms applied, the subjects
created and the apportioning of responsibility when things go wrong. Again
material realities, in all their multiplicity, matter when it comes to atmospheric
governance.
An inferred criticism arising from this line of thought might be that in Whitehead’s
account the social ordering of air pollution is primary; indeed, governmentality
arguably forms the primary conceptual foundation in the book. What is remarkable
when reading about the diverse air pollution experiments and interventions is the
continuity of the who, what and why questions. Power indeed is continually being
reproduced but through enduring connections. Is this because the material world
simply responds to the changes in governance as might be implied by the kind of
passive governmentality reading I mentioned earlier? No, because things resist in
Whitehead’s account. It lies rather in the forms of the conduct of conduct, differently
manifested to be sure, which tie together clean air exhibitions and digital records.
This inspiration sits rather uncomfortably with more voluntaristic accounts or, for
example, Latour’s (2004) Parliament of Things. It is, in Whitehead’s account, vital to
critically engage with the scientists, politicians and economists without entrusting
expertise solely to the actors involved. The normative underpinning derived from
the Foucauldian reading is not about saying what is right or wrong, but opening up
questions about how governance is shaped, by whom and why, that enable us to
appreciate how it might be re-done differently in the future.
The book concludes with a clear statement that ‘we need more, not less, air
sciences to support more, not less, atmospheric government’ (Whitehead, 2009, p.
234). The devil is in the details of course. What counts as science or ‘good
governance’? If we are continuously involved in experimentations with our
climates, it is thinking through which kinds of atmospheric governance enact what
kinds of goods (climatic or social) that becomes the important political question. It
is never finally resolvable. Whitehead’s historical analysis points to the contingency
of the development of air pollution governance in the UK, which suggests that its
futures, too, are unlikely to be anything less than contingent, surprising and contradictory.
With this book in hand, we are much better equipped to understand the natures,
subjects and instruments acting, enacting and re-enacting air pollution, and
equally to understand, enhance and develop new modes of living with(in)
the atmosphere.

URL: https://www.sciencedirect.com/science/article/pii/S0016718512001509
