Edge of Control
Author:
Wright, Rewa
Publication Date:
2018
DOI:
https://doi.org/10.26190/unsworks/21084
License:
https://creativecommons.org/licenses/by-nc-nd/3.0/au/
software assemblages
Rewa Wright
Doctor of Philosophy
October 2018
Thesis/Dissertation Sheet
The software assemblages produced in this research draw upon Gilles Deleuze and Felix Guattari’s machinic
assemblage, a relational ecology of material elements organized by movement, as well as Karen Barad’s concept of
agential realism, where nonhuman matter enacts situated modes of agency. Thinking with Donna Haraway, the software
assemblage takes a diffractive approach, exploring patterns of interference in MR spaces. An analysis of selected media
art practices operates in tandem with this trajectory, investigating influential work by Golan Levin and collaborators,
OpenEndedGroup, Yvonne Rainer, Miya Masaoka, Adam Nash and Stefan Greuter, as well as Christa Sommerer and
Laurent Mignonneau.
In a re-figured version of MR, augments become performative as they co-emerge with my body, in media
environments that assemble living plants, hardware devices, and computational networks. Augments will be
apprehended not only as screen objects, but also as a mode of materiality. Emerging from this research are techniques
and methods that investigate: the performative potential of augments outside of the informatic; the Leap Motion gestural
controller as a performative interface; the generation of augmented audio from the bio-electrical signals of plants; and
the extended senses of embodiment that embroil the performer. Here, signals, augments, and bodies are manifest as
relational forces that diffract and modulate through the software assemblage. An alternative MR emerges that ripples
through physical as well as digital space: it is there that augments exceed the informatic.
I hereby grant to the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or in part
in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all property rights,
such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
I also authorise University Microfilms to use the 350 word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral
theses only).
ORIGINALITY STATEMENT
‘I hereby declare that this submission is my own work and to the best of my
knowledge it contains no materials previously published or written by another
person, or substantial proportions of material which have been accepted for the
award of any other degree or diploma at UNSW or any other educational
institution, except where due acknowledgement is made in the thesis. Any
contribution made to the research by others, with whom I have worked at
UNSW or elsewhere, is explicitly acknowledged in the thesis. I also declare that
the intellectual content of this thesis is the product of my own work, except to
the extent that assistance from others in the project's design and conception or
in style, presentation and linguistic expression is acknowledged.’
Signed ……………………………………………..............
Date ……………………………………………..............
Small sections from the following publications may be included in this Dissertation:
Wright, Rewa. 2014. “From the bleeding edge of the network: Augmented reality and
the software assemblage.” Pp. 185-193. In Post Screen: Device, Medium and
Concept, edited by Helena Ferreira and Ana Vicente. Lisbon, Portugal:
CIEBA-FBAUL.
Wright, Rewa. 2015. “Mobile augmented reality art and the politics of re-assembly.”
In Proceedings of the 21st International Symposium on Electronic Art.
Vancouver, B.C: ISEA International.
Wright, Rewa. 2018a. “Post-human Narrativity and Expressive Sites: Mobile ARt as
Software Assemblage.” Pp. 357-369. In Augmented Reality Art, edited by
Vladimir Geroimenko. Switzerland: Springer International Publishing.
Wright, Rewa. 2018b. "Interface Is the Place: Augmented Reality and the
Phenomena of Smartphone–Spacetime". Pp. 117-125. In Mobile Story Making
in an Age of Smartphones, edited by Max Schleser and Marsha Berry.
Switzerland: Palgrave Pivot.
Table of Contents
Acknowledgments .................................................................................................................. iv
Introduction ........................................................................................................................... 1
What is software? 5
List of Figures
Editor’s note: In the public version of this thesis, due to copyright issues,
Figures 1-16 are unavailable. Figure numbers have been retained for
consistency in the remainder of the thesis.
Fig. 3. Screenshot from the HP Reveal app, taken by the artist. Image restricted.
Fig. 4. Giant hands projected across bodies in the sand. Image restricted.
Fig. 8. Screen shots depicting the Leap Motion ‘Blocks’ example. Image restricted.
Figs. 10 and 11. Leap Motion gestural controller as hand held, two examples. Image: the artist.
Figs. 13-15. Stills from Hand Movie (1966) showing Rainer’s micro-gestures. These images are screen captures from an online video copy of the film, retrieved from https://coub.com/view/80y37 (accessed 20 December 2016). Image restricted.
Fig. 16. Tactile Light. Micro-gestures diffract across plants. Image: Simon Howden.
Fig. 17. Hand avatars blend with environment. Image: the artist.
Fig. 18. Environment view. Hand avatars blend with wheatgrass structures. Image: the artist.
Fig. 19. Hand avatar projected to wheatgrass. Image: the artist.
Fig. 20. Hand avatars tend to abstraction. Image: the artist.
Fig. 21. Grass lattice alongside my performative gestures. Image: the artist.
Fig. 22. Sitting on the wheatgrass sheet, activating piezo sensor and Leap Motion in tandem. Image: Simon Howden.
Figs. 23 and 24. Swishing the grass. Movement left to right. Image: Simon Howden.
Fig. 25. MR screen capture from Unity of hand avatars. Image: the artist.
Fig. 26. On the LCD display, the chopped hand in stark relief. Image: the artist.
Fig. 27. Red dot shows the Wild Versions location. Image: Google Maps.
Fig. 28. A location shot at A.H Reed Memorial Park, Whangarei. Image: the artist.
Fig. 29. Still image from Wild Version 1. Image: the artist.
Fig. 30. Screen image from Wild Version 2. Image: the artist.
Figs. 31 and 32. Sequential screen images from Wild Version 2. Image: the artist.
Fig. 33. Screen capture from the Wild Versions 4. Image: the artist.
Fig. 34. Screen capture from the Wild Versions 4, showing light diffractions. Image: the artist.
Fig. 35. Sketch of my homemade mobile system: webcam, laptop, iPhone 8 plus (the 'frame').
Fig. 36. Screen capture from Leap Motion/Vive display. Image: the artist.
Figs. 38 and 39. Signal path diagrams for Yucca Relay and Agave Relay. Images: artist’s workbook.
Fig. 43. Yucca Tree with white electrodes from MIDI Sprout at bottom.
Fig. 48. Technique 1. MIDI. Image: the artist.
Fig. 49. Technique 2. Image: Simon Howden.
Fig. 50. Technique 2. MIDI. Image: the artist.
Fig. 51. Technique 3. Image: Simon Howden.
Fig. 52. Technique 3. MIDI. Image: the artist.
Fig. 55. Screen capture from HMD with my hand under the avatar. Image: the artist.
Fig. 56. Agave Relay, screen capture from HMD, with contorted avatar. Image: the artist.
Fig. 57. Green Wall Panel on my front porch, March 2018. Image: the artist.
Fig. 59. Contact Zone, ‘agave modulation’ segment. Image: the artist.
Fig. 60. Contact Zone, ‘agave modulation’ segment, HMD view, infrared signal captures my hand.
Fig. 64. Contact Zone. HMD screen capture from Leap Motion/Vive apparatus. Temporally synchronous with Figs. 62 and 63. Image: the artist.
INTRODUCTION
In a widely accepted definition of augments, Ronald Azuma argues that they are data
objects interactive in real time, and that they register across three dimensions
(1997:355). Augments can either register as a data overlay on a ‘virtual’ world or on
the ‘real’, and this changes the particular category of mixed reality (MR) they belong
to: in augmented reality (AR) augments rest upon a ‘real’ world; in the converse case,
augmented virtuality (AV), ‘real’ content is inserted in a virtual environment (Milgram
and Kishino 1994:1321). However, through commercial and industrial practice, the
notion of the ‘augment’ has also become fused to the digital and informatic. In a widely
accepted definition that has migrated to media art practice, augments are often
considered as informatic overlays, content ‘overlaid upon a visual representation of
the physical’ (Lichty 2014:99). An underlying assumption is that augments are always
digital. In narrow technical terms, this could be considered correct, since a
screen/surface is the mechanism by which the digital is displayed to a user/participant
in most types of MR experience. Yet when MR is imbricated with a natural environment, such as
the location of my artwork, recourse to the digital as source offers only a partial
explanation. There, the environmental space came into play as a relational force,
colouring not only my experience of the digital, but also its capacity to materially act
through a technical network. Instead of recourse to the existing approaches and
Through the research that has constituted this doctoral program, I have come to
understand digital augments as performative: they are not discretely formed prior to
interaction with a computational network. Instead, they emerge with the networked
assemblage, as a matter of contact between all the elements in that system, whatever
it might be. Elaborating on relational thinking in this dissertation, augments will be
re-conceived as a mode of materiality rather than as purely digital objects. It is my
speculation that such an approach will allow a more affective set of relations to develop
for MR. But before I trace what such a material approach to augments might do, I will
broadly sketch the diverse and innovative artistic field of MR.
When I began augmenting real world environments at the end of 2012, in work that
preceded this doctoral research, MR was already a burgeoning area for arts practice
and research. Diverse pieces ranged from pioneering forays such as Myron Krueger’s
Videoplace (1975-1989)2, to playable media that blurs physical and digital space such
as Can you see me now? (2001)3 by Blast Theory, to live performances where
augments and hand gestures unfold in real-time, like Tmema’s Manual Input Sessions
(2004)4 (by Golan Levin and Zach Lieberman), to propositions in online environments
like Becoming Dragon (2008)5 by Micha Cárdenas. As well, mobile propositions that
used smartphones to activate augmented ecologies were about to enter the field, a
selection of which will be discussed in chapter 1. Artistic works that blend physical and
digital space underscore MR as a vibrant meshwork of experimental practice. Artists
have built upon and added to the many developments in computer vision, tracking,
and alignment, emanating from engineering and computer science, and they have
deployed these in new ways that diversely mix ‘realities’.
In any process that strives for interfacing through code, software takes on an
important yet diffuse role that encompasses the execution, circulation, and adaptation
of code as a material flow across elements. Hence the need to discuss software less as
a command hub and more as an assemblage. As we shall soon see, software
assemblages set in motion oscillatory processes of interfacing that pass through and
between thresholds of the performing/interfacing body, the digital, and the organic.
But before I elaborate on the question of what composes a software assemblage, the
question of what software is in the context of this research must be addressed.
What is software?
Our interactions with software have disciplined us, created certain expectations
about cause and effect ... [and] fostered our belief in the world as neoliberal: as
an economic game that follows certain rules. The notion of software has crept
into our critical vocabulary in mostly uninterrogated ways (Chun 2011:92).
As software is understood and discussed in this dissertation, exactly where the ‘software’ is located becomes
more complicated as it encounters materialities beyond computational hardware.
The software assemblage is created in resonance with Gilles Deleuze and Felix Guattari’s conception of the
machinic assemblage (1987), which is, variously: a ‘surface of stratification’ lying
between two layers of strata (40); and a ‘machinic assemblage of bodies, of actions
and passions, an intermingling of bodies reacting to one another’ (88). Deleuze and
Guattari located the agential drive of their machinic assemblage in its capacity to
attract, compose, and re-assemble heterogeneous material flows such as those
comprising people, objects, or energies. Assemblages mesh existing materials together
in unexpected ways, allowing unique connections to emerge in process. The machinic
assemblage emphasises dynamic configurations that iterate differently to produce
temporary arrangements of matter, generating a ‘unity of composition’ out of
‘molecular materials, substantial elements, and formal relations or traits’ (49). It does
not generate material formations that follow an already constituted model nor does it
pre-determine what kinds of materialities emerge from changes in the organisation of
matter.
The machinic assemblage is not simply a novel and inventive structure: it proceeds
from the notion that materiality is emergent in its own shifting trajectories, with
assemblages also having the capacity to re-assemble elements through self-organising
processes. Drawing on the machinic assemblage, the software assemblage is a dynamic
relational system that can facilitate the complex and mutual interrelation of both
physical12 and digital materialities.
To help unpack these ideas about assemblage, which see it as both indeterminate and
an operable material system, I have turned to the concept of ‘agential realism’ found
in the work of Karen Barad. Challenging the notion – from humanism and other
dualist modes of thought13 – that human agency is observationally set apart from the
A quantum physicist as well as a philosopher, Barad does not consider that nonhuman
matter is inert, passively awaiting a human hand to provide the agency needed for it
to take shape. Rather, matter is ‘produced and productive, generated and generative’,
activated processually by its own quantum potential (2007:137). Through her agential
realist framework, she disputes the boundary between human and nonhuman forms
of matter, advancing a mode of critical posthumanism that de-centres human agency
by acknowledging the multi-valent agencies of the nonhuman, as a dynamic collection
of entangled forces. To explicate a perspective that advances the software assemblage
approach to MR, human agency must be unwound as the privileged structuring force,
so that consideration might be paid to the transformations between all kinds of matter,
on and off screen. In my research, the agential realities engaged belong to myself as
performer, as well as to the living plants and the shifting movements of code: both
become intra-active matters/materials that re-assemble the making of mixed
physical and digital space as indeterminate events in a shifting ecology of entities and
relations. The ‘matters’ of code and plants are seen to performatively enact their
situated and conditional forms of agency, manifest as practices of signal that
co-compose the work. Nonhuman matter is explored for its affective potential to
relationally transform other entities with which it makes contact. In the software
assemblages in this research, such a conception of matter will give rise to nonhuman
agencies – signaletic, computational, and environmental – that beckon an alternative
MR that is at the edge of control, rather than pre-determined as an executable
sequence of events.
Barad’s agential realism pays attention to the actual movements between types of
matter in the world, which she terms intra-actions; to the structures or apparatuses that
channel matter in a particular configuration; and to an understanding of diffraction.14
Such patterns are interferences that matter generates as it passes across bodies or
objects.15 Intra-action is a nuanced alternative to the somewhat programmatic notion
of interaction.
Of course, interaction has many more detailed definitions, depending upon the
specific field under discussion. For example, in the field of Human-Computer
Interaction (HCI), interaction is ‘the study of the way in which computer technology
influences human work and activities’ (Dix 2009)16, while in art history, ‘interactive
media’ is sometimes considered as an ‘outcome of the history of the human/machine
relationship that goes back to the industrial revolutions ... ‘ (Huhtamo 2004:2). In
media design, interaction is figured by an operation (such as touching a screen) that
performs the function of moving an event forward (such as starting a data system). In
all these understandings, a forward or linear movement is suggested, where
interactivity figures at the functional meeting point between human and machine.
Interaction affords a structured entry point to a more broadly based engagement that
might articulate social and cultural forces as well. Intra-action, however, is not figured
by a linear movement, but an entangled and omni-directional one, where different
modalities of matter generate phenomena that re-draw material boundaries.
It is intra-action that allows distinct and differentiated material phenomena to emerge through the
relations set in motion by the various software assemblages in this research.
As well, to develop the idea of agential realism more precisely within my research
practice in media art, I have extended Barad’s argument to include digital matter such
as pixels, data, code, algorithms. This affords a view of augments as data entities with
a material existence and vitality, which become in relation with code and algorithmic
procedures. These ideas are also supported by the dynamic materialist approaches
from theorists such as Adrian MacKenzie (2006), Anna Munster (2006), Brian
Massumi (2011), and Erin Manning (2013). When digital matter is considered as
dynamically organised, it can also be discussed as affective. Melissa Gregg and Gregory
Seigworth in their seminal edited text the Affect Theory Reader (2010), provide a
nuanced and non-prescriptive sense of how affect permeates the full spectrum of worldly
encounters.
Gregg and Seigworth identify affect without fixing it in a static definition that might
overly determine what affect might be. They suggest that affect is very much a quality
immanent to most situations and encounters, both human and nonhuman. Affect is of
the body, but also incorporeal: it is active, ephemeral, tonal, textural, expressive and
felt. Critically, it modulates through bodies and situations, contingent and extensive,
it moves with ‘matter of virtually any and every sort’.
Drawing on Baruch Spinoza17, Deleuze and Guattari ask what affects a body is capable
of ‘at a given degree of power’, where power is a relational force in a diagram
(1987:256). Implying ‘an enterprise of desubjectification’, affects are a relational force
that loosens ingrained subjectivity, and beckons bodies toward new embodiments
(270). Bodies have affective capacities, and bring these to bear in assemblages, such
as those generated in this research. For my performing body, a recurring theme will
be the affective potentials it lures – and that are lured from it – when performatively
interfacing with nonhuman matter and materials. As well, mobilising Karen Barad’s
work concerning the agency of nonhuman phenomena, I will be fielding an elaborated
definition of ‘bodies’, one that reaches outside of the human sphere. In the Tactile
Signal, and Contact Zone performances – discussed in chapters 3 and 4, respectively
– living plants will be considered for their affective potential as nonhuman ‘bodies’
that co-compose the software assemblage through their agentially emitted
bio-electrical signals. Allied with this thinking, through materialist approaches to code and
software, I will be articulating an approach to data entities that considers the relations
between physical and digital bodies and the differentiated modes of embodiment they
enact when re-assembled by the software assemblage.
Thinking with these relational understandings of bodies, affect and code – that
themselves think with notions of assemblage – will assist in unlocking augments as
materialities that emerge through movement, rather than remaining static and
pre-formed as informatic overlays. My discussion of the performative relations between
computational forces, the body, and the living plant matter will be further informed
by Donna Haraway’s diffractive approach (Haraway 1997:16).
Discussion will shift across theory and practice, where allegiances will be made
between philosophers, scientists and artists, whose thinking and making might assist
with the performative and processual re-configuration of MR via the software
assemblage formulation. Chapter 1 will explore current dominant models of MR
design. I will articulate what I term the informatic overlay approach, linking that to
the notion that the display screen is a mechanism that functions either as a window or
a mirror to the technological virtual.18 Through literature and practice in the related
fields of computer science research, engineering, entertainment, gaming, and media
art, I will take up the question of why the current technologies of commercial and
industrial MR should be encouraged to lean toward more performative practices.
Chapter 3 will take the software assemblage performance outdoors, to the natural
environment of a bush reserve in Whangarei, Aotearoa-New Zealand. There, I will be
examining the potential for a non-geolocative approach to MR that pays attention to
the shifting conditions presented by the environment itself, performing with those
energies. Following on from the Wild Versions (2017), the two Tactile Signal (2018)
performances – Agave Relay and Yucca Relay – will introduce several propositions
that elaborate on the agential contribution of living plants and electrical signals to the
software assemblage. These performances will further question the key assumptions
behind conventional approaches to augmented materialities in MR. Firstly, I will
discuss performances that use a recently developed head-mounted permutation of the
Leap Motion, which ‘looks through’ the infrared camera of that device. This visual
perspective reveals the distorted materiality of the signal that actually maps my
physical hand gestures to the digital augments/hand avatars constructed in the Unity
SDK. Secondly, plants will be considered as elements that generate an alternative
approach to augmented audio. They will be harnessed for their bio-electrical signals,
whose voltage is transposed to a Musical Instrument Digital Interface (MIDI)20
sequence that I modulate with digital augments during processes of performative
interfacing.
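The transposition described here, from sampled plant voltage to a MIDI sequence, can be sketched in outline. The following is a minimal illustration only: the voltage range, note bounds, and readings are hypothetical assumptions, not the actual mapping performed by the MIDI Sprout hardware.

```python
# Hedged sketch: quantising a fluctuating bio-electrical voltage into MIDI
# note numbers. Voltage range, note bounds, and readings are illustrative
# assumptions, not the mapping used by the MIDI Sprout device.

def voltage_to_midi(voltage, v_min=0.0, v_max=5.0, note_min=36, note_max=84):
    """Map one voltage reading onto a bounded range of MIDI note numbers."""
    v = min(max(voltage, v_min), v_max)       # clamp to the expected range
    span = (v - v_min) / (v_max - v_min)      # normalise to 0.0-1.0
    return note_min + round(span * (note_max - note_min))

# A hypothetical series of readings (volts) becomes a playable note sequence:
readings = [0.8, 1.4, 2.9, 4.6, 3.1]
notes = [voltage_to_midi(r) for r in readings]  # -> [44, 49, 64, 80, 66]
```

Because each note emerges from the signal's momentary state, the resulting sequence is indeterminate in exactly the sense described above: the performer modulates, but does not pre-script, the augmented audio.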
Chapter 4, discussing my final exhibition Contact Zone (2018), will take up diffraction
as a material strategy that iteratively re-works the various gestural techniques, code
modules, and modes of signal from previous pieces, across a living ecological
environment transposed to the gallery space. My aim is to dynamically trace processes
of recursive material change as they make patterns of interference across digital and
physical sites, such as those shifts that happen as gestural data passes between my
body, the computational network and reactive plants, or that trouble the consistency
of the sonic signals passing from plants to MIDI via my hand’s micro-gestures as they
materialise digital augments. During a 15-minute performance, I will be combining
both the handheld and head-mounted permutations of the Leap Motion interface.
Embroiled in this new embodiment of body and apparatuses, my performative
interfacing in this last iteration of software assemblage research thinks with the
notion of the relay: I will be both generating and composing with iterative movements
of code and signal that pass through this diffractive network arrangement.
Attempting to beckon MR in media art away from the more conventional approaches
currently being offered to artists by commercial AR/MR, my practice will argue for the
software assemblage as a valuable critical arrangement. An alternative version of MR
that productively adds to the existing field of experimental media art will slowly
emerge from the proposition that, while we might think we know what augments are,
we cannot know exactly what they will do, when explored as affective materials that
shift relations in a software assemblage.
number of years in his Artificial Reality Laboratory at the University of Connecticut (1975-1989).
In its first iteration as an artwork, Videoplace was funded by the National Endowment for the Arts,
and first exhibited at the Milwaukee Art Museum (1975). Retrieved from
https://en.wikipedia.org/wiki/Videoplace (accessed 7 December 2013).
3 Can you see me now? is a collaboration between Blast Theory and the Mixed Reality Lab at the
University of Nottingham. It was first shown at the b.tv festival in Sheffield on 30 November and
1 December 2001. Retrieved from https://www.blasttheory.co.uk/projects/can-you-see-me-now/
(accessed 4 September 2012).
4 Manual Input Sessions (2004) was first performed at the RomaEuropa Festival, Rome, on 28
recognizes their inseparability in ecological relationships that are both biophysically and socially
formed’ (2017:1).
7 Unity SDK, retrieved from https://unity3d.com (accessed 12 December 2013).
8 Dyson is speaking broadly about science as craft, but also advances a significant discussion of
software. He observes: ‘Wherever serious computing was done, young people learned to write
software and to use it. In spite of the rise of Microsoft and other giant producers, software remains
in large part a craft industry’ (1998:1014).
9 In Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and
10 Leap Motion gestural interface (2010 - present) was envisaged by David Holz in 2008 while
studying for a PhD in mathematics. When I began working with it in 2014 it was desktop use only,
however since 2017 it has been technically extended to operate as head mounted on a range of
virtual reality (VR) headsets. I have recently incorporated the head-mounted use into my practice,
and this will be the primary technical mode of interfacing with augmented materialities in chapter 4.
11 Vuforia SDK retrieved from https://developer.vuforia.com (accessed 9 February 2014).
12 In this dissertation, ‘physical world’ is used for what is termed in the computer science literature
the ‘real world’, since the latter term is highly problematic in philosophical discourse. In this thesis the data-
driven counterpart to the physical is the digital (rather than the ‘virtual’, which also has a specific
meaning in philosophy). Physical world systems, in the software assemblage formulation, include
human and nonhuman forces and materials. These are, in this research, an array of living plant
systems, as well as technical devices such as gestural interfaces and head-mounted displays.
13 Agential realism is expressly a critique of the Enlightenment’s legacy where dualism is a key
construction (1996:179-80).
14 Diffractive thinking will be discussed shortly in relation to the work of Donna Haraway, from
everyday experience. A familiar example is the diffraction or interference pattern that water
waves make when they rush through an opening in a breakwater or when stones are dropped in a
pond and the ripples overlap’ (2007:28).
16 Dix, Alan. 2009. "Human-Computer Interaction". In Encyclopedia of Database Systems, edited
maintained, ‘‘No one has yet determined what the body can do’’ (1959: 87). Two key aspects are
immediately worth emphasizing, or re-emphasizing, here: first, the capacity of a body is never
defined by a body alone but is always aided and abetted by, and dovetails with, the field or context
of its force-relations; and second, the ‘‘not yet’’ of ‘‘knowing the body’’ is still very much with us
more than 330 years after Spinoza composed his Ethics’.
18 In this dissertation, there are two primary types of virtual discussed: the 'technological
virtual', which is the virtual as described in engineering and computer science paradigms, a purely
digital and screen-based phenomenon. As well, I will be discussing the 'virtual' as described by
Deleuze and Guattari, which can include, but is not limited to, the site of the screen.
19 Educator Linda Candy defines a practice-based approach to Doctoral research as: ‘an original
investigation in order to gain new knowledge, partly by means of practice and the outcomes of that
practice. In doctoral thesis, claims of originality and contribution to knowledge may be
demonstrated through creative outcomes in the form of designs, music, digital media,
performances and exhibition. Whilst the significance and context of these claims are described in
words, a full understanding can only be obtained with direct reference to the outcomes’ (Candy
2006:3).
CHAPTER 1
In the industrial or commercial settings that in recent years have become the main
locations where MR unfolds, it is generally not considered that mixings of reality and
the virtual might also occur in the physical space of the ‘real’ world. However, as this
research will contend, there is an undue emphasis on what emerges in screen space:
what is needed now is attention to MR as emerging in physical space as well. In
industrial and commercial applications, augments are pictorial structures for
delivering informatic content to a screen, an approach I will refer to as ‘informatic
overlay’. The informatic overlay approach has been problematic for the transposition
of MR from an engineering context to more culturally engaged fields, since it has
limited the role of engagements that might happen off screen – such as the spaces of
social or corporeal engagement that somewhat silently support MR as a technical
medium and indeed propagate its use throughout culture.
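In computational terms, the informatic overlay amounts to a compositing step: digital content is alpha-blended over a camera frame, so the augment exists only on the screen surface. The following is a schematic sketch under illustrative assumptions (single pixels, arbitrary colour values), not a description of any particular MR system's rendering pipeline.

```python
# Schematic sketch of the 'informatic overlay' as screen compositing:
# an augment pixel is alpha-blended over a camera pixel, so the 'mixing'
# of realities happens entirely on screen. All values are illustrative.

def blend_pixel(camera, augment, alpha):
    """Composite one augment pixel over one camera pixel (0-255 channels)."""
    return tuple(round(alpha * a + (1 - alpha) * c)
                 for a, c in zip(augment, camera))

camera_pixel = (40, 120, 40)    # sampled from the physical scene
augment_pixel = (255, 0, 0)     # purely digital content
composited = blend_pixel(camera_pixel, augment_pixel, alpha=0.5)  # (148, 60, 20)
```

On this view the physical scene enters the computation only as pixel data to be written over, which is precisely the limitation of the overlay approach that this chapter sets out to question.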
In this chapter, I will trace the informatic overlay approach from its conception in
engineering paradigms and mainstream computer science research, to its adjuncts in
commercially available products that deliver MR experiences. I will relate that
approach to MR’s major forms of hardware – the smartphone and head mounted
display (HMD) – and their structuring of digital augments as data through the overlay
approach. Furthermore, I will be examining the practices and research paradigms that
deploy digital augments in commercial mediatic assemblages. Since the informatic
focus of augments is rarely interrogated as problematic, the focus of MR research has
been on developing a repertoire of technical methods for embedding augmented content,
rather than on questioning the idea of the informatic itself and its manifestation as an
overlay. This chapter will outline the informatic overlay as a structure ensconced
within the Reality-Virtuality Continuum (Milgram and Kishino 1994), as well as
examining its continuing impact in the context of AR/MR. I will be analysing some of
the dominant paradigms that exemplify its use in commercial and industrial media,
before considering a selection of recent approaches from media art that trouble this
notion, extending MR in an experimental direction. The final section of this chapter
will explore some of the ways that artists have re-worked the technologies of MR
through a range of situated practices exploring the actual embodied navigation of
spaces, through methods that are capable of re-figuring augments away from the
informatic overlay paradigm. Imagining MR beyond a technical medium will prepare
the ground for the following chapter, where performative approaches to augments will
further loosen the informatic overlay approach. Before the informatic overlay
approach can be prised away from augments as a materiality, we need to identify its
salient qualities.
A survey of scholarly literature and practice reveals three significant threads in AR/MR. First, found largely in the technical and engineering
literatures, augments are understood as informatic overlays that add digital
enhancements to a physical space. As we shall soon see, this approach depends on a taxonomy that regulates and classifies – in varying degrees – a ‘real’ world in
continuum with a digital ‘virtual’. The second thread is an approach that operates
through ‘remediation’ (Bolter and Grusin 1999) where AR is seen to re-embody traits
and techniques from previous media such as books and film. The third thread, taken up in the later sections of this chapter, comprises experimental media art practices that re-work augments through situated, embodied approaches.
Thomas Caudell and David Mizell (1992) coined the term ‘augmented reality’ to
describe the visual and textual layer applied to the heads-up display (HUD) they adapted to superimpose virtual information on aeroplanes being manufactured at Boeing
(1992:659). There, AR was conceived as a work-enhancing strategy for optimizing
engineering and thus efficiency in manufacturing technologies. Building on that
concept of AR – as a datafied window sitting over the physical world – computer
scientists Paul Milgram and Fumio Kishino (1994) wrote the influential research paper
“A Taxonomy of Mixed Reality Visual Displays”, as a method for classifying varying
combinations of ‘virtual’1 and ‘real’. While several updates and extensions have been
attempted (see Milgram, Takemura, Utsumi and Kishino 1995; Wang and Dunstan
2011), the original article is still the most influential because it contains a reference
scale used as an instrument to measure types of ‘virtuality’. The so-called Reality-
Virtuality Continuum (RV Continuum) was conceived as a spectrum to assist in the
accurate classification of MR on screen displays. There, the category ‘virtual’ is
considered as digitally delivered screen presence, while the ‘real’ is the physical world
in all its dimensions.
Milgram and Kishino’s intention was to remedy some of the difficulties encountered
by computer science researchers by designing a descriptive model that would situate
technical networks that were materially neither purely digital nor physical. Thus,
designers would be able to succinctly decide to what degree their prototype was either
an augmentation of the real world (AR), an augmentation of the virtual (AV), or
virtually immersive (VR).2 Milgram and Kishino state:
extent of the illusion that the observer is present within that world?”).
The idea was that a high-resolution display would afford the user a more realistic
simulation of the digital augments – and thus an enhanced ‘presence metaphor’ –
underscoring the value placed on ‘realism’ in the design. My argument, however, is
that this approach offers limited potential for the user/participant to engage with the
digital (virtual) in a way that is not pre-determined by the parameters of the
informatic: privileging the informatic places the participant in the position of passive
appraisal, removing the potential to stimulate new modes of embodiment with the
digital. In subsequent taxonomies emanating also from engineering and computer
science, the content of various AR and MR spaces is indexed, framed, and categorised
according to their alignment with technologies of immersion, or lack thereof (Wang
and Dunstan 2011; Ohta and Tamura 2014).3 Yet embodiment is rarely discussed. The
literal use of the RV Continuum’s criteria in subsequent designs has led to the
somewhat problematic understanding that MR involves a linear movement from the
‘real’ to the ‘virtual’: where bodies/objects/environments are transposed from physical
space to screen space, via a video stream that combines computational elements (such
as digital augments) with the ‘real’ (a video image of the human body, for example) on screen. The select examples cited above illustrate major uses of AR as an informatic
overlay, although many more can be found. It is outside the scope of this research to
cover the entire field; rather it is my intention to locate this approach to interaction in
AR within the lineage of an engineering paradigm, which has migrated into the media and
entertainment industry. The following section will elaborate on the engineering
techniques that make the informatic overlay possible and indicate how – in the design
of commercial products – those techniques also flow into MR as an emergent field.
The apparatus that started the HMD/HUD research trajectory in MR was invented by
Ivan Sutherland: his Sword of Damocles (1968) took up an entire room, had functional
vision from one eye, could only be comfortably worn for a few minutes, and showed
wireframe line drawings (Sutherland 1968:757-759). Yet its radical concept – that
human vision might combine with digital content through a display worn on one’s
head – established a research trajectory for AR/MR/VR that has proliferated today across many devices. Fundamental to this research trajectory is a pragmatic approach
that aimed for greater efficiency in task completion, helpful in military and industrial
contexts that embrace the notion that the human body would be enhanced by
augmented vision. That idea still drives much AR/MR design, yet as I shall explicate
in chapters 3 and 4 – through my Tactile Signal performances – the head mounted
apparatus itself is a highly problematized device that offers only partial perceptual
enhancements. Current commercial and industrial approaches incorporate the latest
sensing technology to convey sophisticated situated and highly contextual data aimed
at adding clarity to the digital experience (Izadi 2012, 2016; Ohta and Tamura 2014).
However, amidst this endless drive for digital verisimilitude, the reasons as to why we
might want an alternative to a seamless digital experience are less frequently
interrogated.
Engineering handbooks are replete with a wide range of tracking techniques for
AR/MR, including marker-based and image-based tracking, model targets stored in
the Cloud and geo-locational information (Carmigniani and Furht 2011; Kent 2012;
Craig 2013; Peddie 2017). These techniques have facilitated a plethora of AR games
for smartphone, with mobile AR being the largest category of commercial MR use (as
in the gamified examples mentioned above). Product launches attach digital content, such as a 3D model of a new car, to real-world marker objects such as cubes, while QR codes on supermarket cereal
boxes aim to tempt buyers with embedded links to product websites. MR is built upon
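The tracking techniques listed above share a simple underlying control flow: recognise a fiducial (a marker, image target or QR code) in the camera frame, look up the content registered to it, and anchor that content at the detected position. The Python sketch below illustrates only that flow; the marker IDs, registry entries and stubbed detector are hypothetical stand-ins for a real computer-vision pipeline, not drawn from any system discussed here.

```python
# Minimal sketch of the marker-based 'informatic overlay' pattern:
# recognise a fiducial, look up its registered augment, anchor it.
# The registry and detector below are hypothetical stand-ins for a
# real computer-vision pipeline (e.g. ArUco or QR detection).

AUGMENT_REGISTRY = {
    17: {"content": "3d_car_model.glb"},                  # product-launch cube
    42: {"content": "https://example.com/cereal-promo"},  # cereal-box QR code
}

def detect_markers(frame):
    """Stub detector: a real system would scan the camera frame.
    Here we pretend two fiducials were found at pixel positions."""
    return [(17, (320, 240)), (99, (50, 50))]

def overlay_augments(frame):
    """Pair each recognised marker with its registered augment."""
    placed = []
    for marker_id, position in detect_markers(frame):
        augment = AUGMENT_REGISTRY.get(marker_id)
        if augment is not None:  # unregistered markers are ignored
            placed.append((position, augment["content"]))
    return placed

print(overlay_augments(frame=None))
# marker 99 has no registered content, so only marker 17's augment is placed
```

The lookup table is the crux of the informatic overlay: the augment is fixed data keyed to a fiducial, regardless of who holds the camera.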
Writers of technical books frequently laud AR as a science fiction that has become real,
often framing their enthusiasm with an image from the film Minority Report (Peddie
2017:132), or perhaps a quote from the fictive works of popular writers such as Isaac
Asimov, Bruce Goldberg or William Gibson. Popular technology writer John Rousseau
(2017) went as far as constructing his ‘Laws of Mixed Reality’ based on Asimov’s ‘Laws
of Robotics’, interestingly, an approach that is being taken seriously by other writers
thinking along an AR-MR-technofuturist axis (Peddie 2017:17). Rousseau asserts that
MR will be a seamless mix of ambient environment and high-resolution digital
imagery, facilitated by what he vaguely terms ‘future hardware’:
All the HUDs mentioned above overlay augments as information on a real world, and
the only difference between these products involves the details of their technical
methods; for example, localised variations in screen resolution and refresh rate;
stereoscopic projection and tracking methods; screen material type; or, 3D
holographic overlays instead of polygonal models. However, what is already clear is
that these wearables are culturally bound to many unresolved social concerns, such as
privacy.14 From a perspective and practice that is more concerned with questions of
embodiment – such as mine – strategies that deprivilege vision are necessary in order
to re-evaluate these commercial obsessions with high-resolution displays. Such
strategies must factor in artistic methods that attempt to shift MR out of the informatic
So, at the foundations of AR/MR as explained via remediation, the window metaphor is already considered essential to the medium. Following his highly influential and still widely referenced text with Grusin, Jay Bolter drew on remediation in much of his practice-based design research, speculating that to develop into a fully-fledged medium, AR would need to reference earlier media predecessors. For
example, in a multi-authored article he wrote:
Another uptake of AR as a remediation can be seen in Sean Morey and John Tinnell’s
recent book, Augmented Reality: Innovative perspectives across art, industry, and
academia (2017), where AR is contextualised as a mode of writing, fused to older
technologies such as the book, tracing out a new space of design for reading itself
(2017:9). The anthology collects interviews with AR practitioners across industry,
academia and art, in fields as diverse as interaction design, teaching, digital business,
electrical engineering and experimental art.17 Supporting their contextualization of AR
via writing as a historical precursor, physical copies of their book incorporate a form
of marker-based AR: readers scan images embedded in the text, to access augments
that supplement the book’s content. Scanning a book-embedded code triggers
augmented material, such as video interview clips, examples of artworks or media
products.18 Here, augments are approached as information whose content is intended
to enrich the reading experience through an intertextual narrative that passes the
reader from the physical space of the book to the digital space of the overlay. In this scheme, the act of reading is composed across both physical and digital sites, producing a conjunctive experience. Yet the process
a reader goes through to access these augments is somewhat inhibitive of a dynamic
relay between the physical and digital.
Attempting to download the application Aurasma19 to your smartphone, you are re-directed to another app called HP Reveal, which has recently taken over Aurasma.
The content of the book has been migrated, however, so you are nevertheless able to
find the same augments. Activating the app on your phone, you are required to scroll
past various media while navigating to the publisher’s (Parlor Press) page: a cartoony
promotional poster for the movie Ironman, a salacious cover from OK! Magazine, and
some other promotional material you don’t recognise. It is apparent that Morey and
Tinnell’s book is now imbricated in a flow, whose matter is littered with the flotsam
and jetsam of marketing images. Having passed into a commercial mediatic
assemblage, the book is no longer simply a hybrid augmented form, as perhaps was
the intention of its editors. Since the print book takes a perspective that blends
industry, academia, and art, it seems likely that the idea was to produce a novel and
medium-appropriate update to the traditional book, whose design would enact the editors’ central argument: that AR provides an enhanced design space for writing.
Another influential approach that has absorbed remediation strategies can be found
in work emanating from the EPFL+ ECAL Lab in Lausanne, Switzerland, where
engineer-designers Nicolas Henchoz, Vincent Lepetit, Pascal Fua and Julien Pilet
combined new advances they had made in computer vision with the idea that AR
needed to embrace communicative simplicity and ease of use in order to become
creative. Their chief contention was that if AR – developed through scientific research, demonstrations and specialised conferences since 1992 – were to shift beyond its status as simply a ‘technology’ to take on the status of ‘medium’, it would need to
communicate as a ‘dedicated visual language’ containing attributes of grammar,
syntax and the potential for developed narratives (Henchoz and Lepetit 2011:85).
Elements from the real world would be doubled to instantiate a consistent semiotic
flow between physical and digital. For example, Camille Scherrer, a member of their
laboratory, developed the artwork Le Monde des Montagnes (2008), an installation
that used the older medium of paper cut-outs as a trope to segue between a physical
book and its digital augments.
While the theoretical approaches outlined by Henchoz and Lepetit (2011) and explored
in the practical research output of the EPFL+ ECAL Lab are less sequential than the
historical rollout of remediation (as it appears for example in Bolter and Grusin’s early
discussions), they do express some shared concerns. For example, the idea in
remediation that achieving useability by leveraging shared and familiar medial
elements will assist in drawing an audience, resonates with Henchoz and Lepetit’s
principle of creating visual consistency between real and virtual worlds. Furthermore,
the models of interface proposed by Bolter and Gromala (2003), where the interface is either window or mirror, also hold in EPFL+ ECAL research, in regard to the notion
of doubling the real world into the virtual to simulate a consistent semiotic flow.
EPFL+ ECAL produced the influential AR touring showcase, Gimme More (Eyebeam,
New York 2013).20 There, Henchoz stated:
We can interpret what Henchoz describes as the ‘interdependence between the object
and the information’ through the idea that augmented content is fused to
actual objects in a logical semiotic connection. For example, TattooAR (2013), an
artwork by Cem Sever from the Gimme More exhibition, placed augmented tattoo
designs on the bodies of participants. Through an altered image stream – that showed
a visitor’s body with a tattoo augment – participants were afforded a view of their
tattooed ‘self’, using a large screen as a mirror metaphor.22 Parallels exist with
Snapchat’s use of the augmented mirror technique (discussed earlier in this chapter) to insert the body into a fixed framing. Since this artwork is at human scale, rather than
on the small smartphone screens of the Snapchat app, we might also consider what
sense of digital embodiment it might convey for the participant. If we do this, however,
we would also need to inquire as to the quality and degree of the embodied experience
that is generated. Since this re-presentation of corporeality is pre-programmed – applied to the bodies of whoever encounters the experience – there is limited scope for the participant to shift the sense of embodiment they perceive in the magic mirror. Whilst they might express wonder at their body’s new ink, that is the limit of
the experience itself.
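In technical terms, the ‘magic mirror’ pattern can be reduced to alpha-compositing a pre-authored augment over a tracked region of the camera frame, at a position and opacity fixed in advance. The toy Python sketch below blends single-channel grey ‘pixels’ to show only that fixed operation – the very pre-programming that, as argued above, limits how the experience can modulate with the participant. All values and names are illustrative:

```python
def composite(frame, augment, alpha, top, left):
    """Alpha-blend a small augment (a 2-D list of grey values, 0-255)
    over a camera frame at a fixed, pre-programmed position."""
    out = [row[:] for row in frame]  # copy so the source frame is untouched
    for i, row in enumerate(augment):
        for j, a_px in enumerate(row):
            y, x = top + i, left + j
            out[y][x] = round(alpha * a_px + (1 - alpha) * frame[y][x])
    return out

frame = [[100] * 4 for _ in range(4)]   # stand-in for a video frame
tattoo = [[255, 255], [255, 255]]       # stand-in for the tattoo design
print(composite(frame, tattoo, alpha=0.5, top=1, left=1)[1])
# the blended row -> [100, 178, 178, 100]
```

Whoever stands before the mirror, the same `tattoo` array is blended at the same opacity: the participant’s body changes nothing in the computation.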
In the examples and case studies described above, there is a strong tendency for
augmented content to appear as pre-programmed, called up from a server. It does not
modulate or shift with the environment or with the participant in the experience. In
the section following, we shall work through a third major approach to AR/MR, by
looking at a selection of experimental artworks. Emanating from artistic culture and
theoretical concerns, augments are less restricted here by the need to convey
structured information. Digital overlays are articulated more for what they can do
rather than any metaphorical content such as ‘presence’ they might carry. While at
times utilising presence metaphors to connect digital and physical, AR/MR by artists
radically skews the informatic overlay approaches taken by the engineering and design
examples discussed above. Augments are not there to facilitate literal or mimetic
connections with an object or a body. Rather they are situated as critical and playful
forces that begin to work with and open up questions of embodied and performative
practice.
Steve Benford and Gabriella Giannachi argue that the RV Continuum sets the real and virtual in opposition to one another rather than fostering a more relational system
(Benford and Giannachi 2011:3). They describe the RV Continuum as a ‘largely
mathematical and technology-centric’ method of ‘constructing virtual spaces’ in order
to align them with physical space (2011:43). In response, they offer the notion of
‘trajectories’, where the participant enacting an artwork moves experientially through
real and virtual online worlds that are partially pre-scripted, and partially self-
generated. The pioneering MR participatory performances Uncle Roy All Around
(2003)23, and FlyPad (2009)24 by Blast Theory weave together theatrical performance,
online and real-world environments, audience participation, role-playing, and data
extraction in complex arrangements that unfold mutually across the digital as well as
the physical (Benford and Giannachi 2011). Carving out a new genre of MR
performance, Blast Theory’s contribution to a performative MR, advances a less
digitally privileged mix of realities, where participants are given cues by the artworks
that send them off on exploratory trajectories that pass through ‘hybrid space’.
Referencing Gilles Deleuze’s notion of the ‘fold’ they describe hybrid space as:
For example, in FlyPad (2009), a camera placed over a gallery’s atrium framed a wide
orthographic view of the space below. In this frame – a ‘flying area’ – visitors were able
to take on the identity of a winged avatar (a brightly coloured insect), whose movement
they controlled using a footpad. Flying across and through the atrium, the flyers could
join together, performing new movements by melding their avatars, or remain
separate but fly with less vigour. Prior to playing the game, as they walked around the
gallery, data was extracted from participants’ movements by way of a Radio Frequency Identification (RFID) tag, and this data was added to their digital avatar in the
FlyPad game. Incorporating a participant-generated trajectory with pre-figured elements produced an iterative artwork that could never unfold the same way twice
(2011:138-141). While Blast Theory’s work does not bear directly on my practice-based approach – it concerns, rather, the potential of MR as a transmedia storytelling medium – it illuminates the need for more performative versions of MR,
influenced by relational assemblages rather than technocentric formulations.
Following the thread of dissent toward technology-driven methods that occlude the
body and align corporeality with a pre-figured data system, leads us to artist generated
approaches that use different techniques to combine corporeality with augmented
digital worlds. In John McCormick and Adam Nash’s Reproduction – an artificially
evolving performative digital ecology (2011), autonomous agents helped to generate
embodied relations that re-sited image/colour data from human interactors to an
adapting digital ‘life-form’. Extracted using motion capture technology, the results of these hybrid reproductions – as they adapted in real time in response to feedback – were presented as a full-scale projection in an immersive room. The impact
of this mutation on the visitor is discussed as provoking a multi-sensory state of
‘contemplative interaction’ (Riley and Innocent 2014; Riley and Nash 2014), and it is
noted that this understanding differs from conventions that frequently situate MR as
either ‘reactive or distracting’ (Riley and Nash 2014:260). Contemplative interaction
offers itself as a method that examines ‘notions of affect that relate bodies, locations,
spaces and codes across the physical and virtual’ (Riley and Nash 2014:263). This is again a much more complex figuration than that offered by the RV Continuum, and one that allies itself with Deleuzian notions of affect and embodiment (Riley and Nash 2014:261).
As physical bodies passed by one another – in the illuminated darkness of the tactile
and tractional sand – they passed over and against digital augments as light
projections of the bodies of others. This relay of projections formed a recursive loop
This happens via the virtualization of their bodies, but also via the emergence
of a complementary interplay, in their embodiments and the real environment,
among locative media, signaletic telepresence, and ubiquitous computing
(2012:18).
Ekman notices a blurring of distinctions, such as between public and private space via
the virtualization of bodies. Like Sandbox, the technical operation of Underscan
involved luring people into a pre-prepared space embedded with sensors and
surveillance equipment, where their movements would be tracked and used to trigger
corresponding movement image sequences. In the case of Underscan, the projected
sequences were from pre-recorded films, deployed on the pavement as augmented
overlays. In the case of Sandbox, the projections on the sand unfold in real time ,
reflecting advances in tracking technology made in the seven years between these
works. Looking over to the sandbox from the sand, participants could see others
playing with their images, and they could do the same with the large-scale hands
projected on them.
Sandbox’s project description lays bare the power relations and affective conjunctions of the work:
The project uses ominous infrared surveillance equipment not unlike what
might be found at the US-Mexico border to track illegal immigrants, or at a
shopping mall to track teenagers. These images are amplified by digital
cinema projectors which create an animated topology over the beach,
making tangible the power asymmetry inherent in technologies of
amplification.29
This ‘animated topology’ approached power relations as matters of scale: giant hands manipulated tiny people, yet the tiny people were also able to overturn that relation and become the giant hands. The spaces are alluring and carefully composed,
drawing humans toward their dynamism – such as movements of light and data –
latent with affective potential. Manuel DeLanda (2007) suggests that human
experience in Lozano-Hemmer’s artworks is largely produced through ‘expressive
spaces’ activated by underlying nonhuman energies. In such spaces, code is not simply
executable but is affective, ‘long series of ones and zeros that ultimately embody the
software animating the hardware’ (DeLanda, 2007:104). Establishing a giant structure
populated with surveillance cameras, a bespoke tracking system, and ultra-high power
projectors, Lozano-Hemmer generated an expressive space for humans to affectively
perform within.
All the artist-led approaches described above convey the idea that the physical and
digital spaces of MR are actually not separate but meet through oscillatory movements
of bodies and data. In such assemblages, the materiality of the digital is not only
affective, but further suggests that senses of embodiment operate between human and
nonhuman. This affords the broad perspective that data and the corporeal might
mutually co-constitute one another. Clearly, such approaches are markedly different
from the engineering and computer science paradigms discussed earlier in this
chapter, that would fix MR as a matter of technical arrangements on a display screen.
Lanfranco Aceti and Richard Rinehart’s (2013) edited special edition of Leonardo
Journal, Not Here Not There, was the first comprehensive survey of AR/MR as an
artistic category. Framing a quote from the Manifest.AR collective30, Rinehart
summarizes some of the provocative issues raised by mobile AR:
Sited art and intervention art meet in the art of the trespass. What is our current
relationship to the sites we live in? What representational strategies are contemporary artists using to engage sites? How are sites politically activated?
(Rinehart 2013:9)
This collection focussed on mobile AR deployed through geo-location, since this was a
popular artistic movement at the time. Soon after Aceti and Rinehart’s collection,
Vladimir Geroimenko’s text Augmented Reality art: from an emerging technology to
a novel creative medium (2014), became the first book to systematically analyse the
artistic threads coming out of AR as a new medium. Contextualising that study,
Geroimenko reproduced in full the Manifest.AR manifesto (Freeman et al. 2012),
leading him to argue that what differentiates AR from other emergent media forms
such as ‘virtual reality, Web art, video, and physical computing’ is that it is bound up
with an activist politics that re-purposes technologies like mobile phones as radical
artmaking devices (Geroimenko 2013:vii). Moving away from the restrictions imposed
on AR as medium defined by the informatic overlay or by remediating older media,
artists working with mobile AR have forged new critical pathways, such as those
described in Geroimenko’s collection. In media art, mobile AR – popularized by commercial products such as the smartphone games described earlier – takes geo-location into activist contexts.
Mobile augmented reality art (MARt) emerged as a cultural force in AR from about
2010, with the influential Manifest.AR group founded on January 25th, 2011,
following their ground-breaking guerrilla exhibition/intervention in the Museum of
Modern Art, New York, We AR in MoMA (2010).31 There, organisers Mark Skwarek
and Sander Veenhof conspired to stage an exhibition of augmented art without the
permission of the gallery, and got away with it. Holding tours of their artworks, the organisers kept the show under the radar of the gallery authorities, inaugurating a new movement in activist and interventionist installation. Subsequently, the group staged
other interventions where geo-location was used to surreptitiously place augments at
canonical and politically loaded sites, such as outside the New York Stock Exchange
during Occupy Wall Street (Skwarek 2014:3), and at the Venice Biennale 2011 and
2013 (Thiel 2014:31). During Occupy Wall Street, the area in front of the New York
Stock Exchange was off limits for protestors, yet the Manifest.AR group were able to
stage a ‘flash mob’ waving smartphones rather than obvious signage (Skwarek
2014:17). While augments still operate as informatic overlays, they are skewed to
critical ends by an activist culture. A deeper knowledge of how information operates
via layering in AR, also featured in some of this work. For example, by hosting their
work on private ‘layers’ in the app Layar32, Manifest.AR avoided the marketing noise
made by commercial mediatic assemblages. Such strategic uses of augments in
conjunction with mobile devices, not only encouraged a critical turn in thinking about
the informatic overlay, but also implied a more extensive concept of embodiment.
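The geo-locational placement used in these interventions reduces to a proximity test: is the device within a set radius of the coordinates an augment is pinned to? A minimal Python sketch of that test, using the standard haversine great-circle formula; the site name, coordinates and trigger radius below are illustrative values of my own, not taken from any actual work:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_augments(device_lat, device_lon, augments, radius_m=100.0):
    """Return the augments pinned within radius_m of the device."""
    return [a["name"] for a in augments
            if haversine_m(device_lat, device_lon, a["lat"], a["lon"]) <= radius_m]

# Hypothetical augment pinned near the New York Stock Exchange
site = [{"name": "protest_sign", "lat": 40.7069, "lon": -74.0113}]
print(visible_augments(40.7070, -74.0114, site))  # ~14 m away -> shown
print(visible_augments(40.7300, -74.0000, site))  # ~2.7 km away -> hidden
```

The augment is anchored to nothing physical at all: only a coordinate pair and a radius, which is precisely what let Manifest.AR place works at sites they could not physically occupy.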
In Tamiko Thiel and Will Pappenheimer’s Biomer Skelters (Liverpool 2013 and
various iterations), a participant walks around a pre-determined area of a city with
their smartphone. As they move the camera sensor, they see an array of digital plants
appear on their phone screen. The participant holds a Zephyr heart rate monitor to
connect a bespoke smartphone app to their heartbeat.35 The frequency of the signal
generated by the beating rhythm of their heart is converted into the augments – virtual
plants – that populate the ‘biome’, a term borrowed from biology that describes a
community of plants and animals living together in a ‘congruous ecology’ (Woodward
2008:2). In this case, the biome is an urban landscape populated by data, plants and
bodies.36
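The signal conversion at work here – a heartbeat frequency driving the density of virtual planting – can be sketched abstractly. The mapping below, from beats per minute to a plants-per-minute spawn rate over an assumed 40–180 BPM range, is my own illustrative guess at such a translation, not the artists’ actual algorithm:

```python
def spawn_rate(bpm, lo=40.0, hi=180.0, max_per_min=12.0):
    """Map a heart rate (beats per minute) to a virtual-plant spawn
    rate: a calmer heart seeds the biome slowly, a racing one densely.
    Input is clamped to an assumed physiological range (illustrative)."""
    bpm = max(lo, min(hi, float(bpm)))
    return max_per_min * (bpm - lo) / (hi - lo)

print(spawn_rate(40))    # resting floor -> 0.0 plants per minute
print(spawn_rate(110))   # elevated -> 6.0
print(spawn_rate(200))   # clamped to the 180 ceiling -> 12.0
```

Even in this crude form, the sketch shows how the augment ceases to be fixed data: its rate of appearance is modulated by the performer’s body.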
Their power lies in the interactions between the two spaces – between vision
and hearing (what users are seeing and hearing), and between present and
past... (Manovich 2006:226).
By directing the participant toward conflicting perceptual zones, Cardiff’s work shifts
conventional preconceptions about that space, transforming relations between
participant and site. Janet Cardiff and George Bures Miller’s The City of Forking Paths
(Sydney, 2014-2017) places the participant in a situation where they must follow the
audio-visual logic of an AR embedded video, along the exact cartography set out by the
narrative. Participants play a video on their smartphone and follow along with the
artists’ shamanic audio-visual narrative as it meanders through The Rocks district in
Sydney. Required to trace multiple narrative flows at the same time, the participant
must attune to the work as it unfolds: the video stream playing constantly on the
phone’s screen; the binaurally recorded narrative playing through headphones;
and, the parallel ‘reality’ of the street experience during the walk (Wright 2018b).
These nuances are less narrative than embodied: participants must pause and sit, walk, take
multiple turns, follow a tunnel under a street, and so forth. Confronted with at times
startling imagery on-screen — a phalanx of office workers dimly lit with mobile
phones, a gagged man with duct tape over his mouth wearing a straight-jacket in
Miller’s Point — the participant must perceptually negotiate a parallel flow of
experience between physical and digital. If they deviate from the ‘forking paths,’ they
lose their place; for example, by taking a turn down the wrong street, they are cut adrift
from the artwork, and the co-emergent flow of physical and digital experience is broken. In this way, the work operates alongside each person’s sensory
apprehensions and habits, foregrounding the role of the body in producing a MR
experience rather than being framed by technical constraints such as screen,
transmission and resolution.
As I have suggested above, there are many artistic deployments of MR that create more
complex relations between digital and physical spaces than is found in commercial and
industrial arenas. Hence, we require some concepts drawn from what I will broadly
call materialist approaches to computation and the interface. A materialist approach
to the interfacing of the physical and digital explores the contingent relations between
all elements (human and machine); how these reconfigure and re-assemble in situ and
across the specific spatiotemporal phenomena that take place while interfacing. Here,
looking ‘beyond buttons’ is a critical practice, of which Andersen and Pold (2011) note:
... investigation of the interface does not stop at the computer’s surface but
goes beyond the buttons and reaches ‘back’ into history, and ‘through’ to the
human senses and perception, ‘down’ into the machine, ‘out’ into society and
Alexander Galloway (2013) and Brandon Hookway (2014) have posited non-
instrumentalist accounts of the interface that uncover its neglected social, political and
networked conditions. In The Interface Effect (2013), Galloway explores the interface not as an object (or a creator of objects) but as an ‘effect’. ‘Interface’ is the structure by
which a centre relates to an edge that it seeks to control in order to extract value
(2013:41). Galloway argues that interfaces – such as those found in gaming and
entertainment industries – are imbricated in systems of value extraction: the game
itself bears the stratified imprint of an ideological structure. For example, in an
analysis of the massive online game World of Warcraft (Blizzard Entertainment
2004), he argues that the commerce and economics of the frequent in-game purchases one must make in order to ‘battle’ successfully mirror everyday life in a neo-liberal
capitalist society (Galloway 2013:42-44).
Striving for control of the system, but only ever approximately achieving it, the interface opens the
internal elements of the computational system to external potentials in operations of
emergence that create assemblages (44).
Approaches such as those by Galloway and Hookway are useful for artistic approaches
to MR, since they apprehend interfaces as more than tools that afford access to a
technological virtual. As analysed earlier in this chapter, approaches to augments as
an informatic overlay are themselves implicated in systems of value extraction,
product marketing, and work efficiency. Re-positioning the interface from its object-
like status as a surface (or surface of a ‘thing’) to a more open relational assemblage,
acknowledges the complex interrelations that enmesh concrete physical and digital
virtual world systems. Thinking MR as a dynamic interface that intra-actively
negotiates with other assemblage elements across the physical and digital, allows new
kinds of practices that open MR to new and emergent performative materialities.
Mobilising concepts from corporeal and cultural theory — such as agency, materiality,
and sociality — Adrian MacKenzie has argued that data structures and their elements,
as well as code and coding practices are performative and need to be addressed in their
singular operations and functions, rather than as transmitters of ‘meanings’ or
executors of commands (2006:75). MacKenzie does not explore code as an object (with
fixed form and bounded parameters), but as a nexus that sets in motion a series of
interconnecting affects and relationships across technical and social networks:
MacKenzie’s analysis reveals software, algorithms and data structures as not only
technical (and stable) elements but as highly provisional, individuated, sociocultural
events, operating through their connections and couplings, across human and
nonhuman networks that make as well as break connections. Code is, on this account, a
mode of sociality that forms networks of agency and new possibilities of embodiment
with and for its users.
David Berry (2011) offers a useful conception of code as ‘computational logic located
within material devices’ (63), where code produces a series of materialities conjoining
the activities of the end user, the creative writing of the programmer, and the devices
that run executable commands. Together these engender code as a relational system,
which can be deployed in any given cultural milieu, with quite specific affects and
effects. For Berry, when embedded within technical devices, code takes on an agential
role, articulating the nuances of the software medium and linking those nuances to
autonomous agents, applications, and user behaviours. Code is located through its
material relations with both software and corporeality (experienced both individually
and collectively via participation), enabling a conception that extends far beyond a
series of executable commands within a programme. Deployments of assemblages –
whether these be of the interface or code – are significant for this study and afford
diverse material elements the capacity to coalesce according to their own affordances,
intensities, flows and attractions. The assemblage itself then becomes a re-
configurable morphology that actively resists structuring or engineering as a limited
technical operation: its elements continually prehend a desire for shifting,
differentiated re-assembly.
nonhuman forces that prehend affect and beckon human bodies toward senses of
embodiment that are spontaneous and emergent with a computational network. This
point will be examined more closely in the next chapter, when I explore augmentation
as relational to intra-action in the software assemblage formulation.
Now that we have identified the crux of the problem with treating digital augments as
an informatic overlay, we can embark on a trajectory that morphs augments into a
performative digital material. Instead of accepting the established interaction
paradigms from engineering and computer science discourse, this research will
develop a critical posthumanist approach where augmented materialities emerge
through processes of intra-action that imbricate my performing body, choreographic
data objects, living plants, custom designed software, and hardware sensors: my aim
will be to assist in the materialization of various mixes of reality that emerge off
screen as well as on. Moreover, I will test – through experimental art practice – the
software assemblage as a formulation that might offer the potential to extend MR
beyond the informatic overlay approach and gesture toward the vitality of an affective
world space. Developing techniques of performative interfacing that challenge the
conventional informatic overlay approach, we shall attune our senses to the potential
of emergent modes of embodiment that arise when digital augments become
performative.
1 At different points in this thesis, the term ‘virtual’ will be approached both in a philosophical
and a technological sense. The following section speaks to its technological sense in computing,
where the virtual is ‘that which is simulated by computer technology’ (from the Oxford English
Dictionary online). The philosophical ‘virtual’, as a concept from Gilles Deleuze, will factor more
in chapter 2.
2 As identified by Milgram and Kishino (1994), there are two technical ‘types’ of MR: Augmented
Reality (AR) and Augmented Virtuality (AV). This thesis is not concerned with AV, since analysing
the augmentation of technologically virtual worlds would require adopting an entirely different set
of strategies and methods. My research focusses instead on the mixings of reality and the digital
that speciate from AR and have migrated to MR.
3 While there has been much technological and historical development since 1994, the concept of
the RV Continuum is still widely applied. New taxonomies have been made, accounting for
emergent technical developments. For example, Wang and Dunstan note in their more recent MR
taxonomy for industrial applications: ‘The goal here is to minimally modify and complement,
rather than replace, Milgram’s original MR continuum by considering media representation
features and tracking technology, and discussing the input and output features separately’
(2011:495).
4 Retrieved from https://en.wikipedia.org/wiki/Wikitude (accessed 9 February 2014).
5 See Simon Perry (2008-10-23). “Wikitude: Android App With Augmented Reality: Mind
AR game from Niantic Labs. Much of the mechanics for the game were based on an earlier
application called Ingress (2013), where (again) two factions compete to secure virtual portals and
takeover real territory. https://en.wikipedia.org/wiki/Ingress_(video_game)
7 While AR mode is only available on some phone models, the game is known primarily as an AR
game. https://support.pokemongo.nianticlabs.com/hc/en-us/articles/115015868188-Catching-
Pokémon-in-AR-mode-iOS-only-. For example, Vladimir Geroimenko’s forthcoming edited
collection (Springer 2019) focuses on Pokémon GO as an AR paradigm.
8 Retrieved from https://www.snapchat.com (accessed 2 July 2017).
9 Retrieved from https://lensstudio.snapchat.com (accessed 13 February 2018).
10 Additionally, there have been issues around corporate data collection, and responsibilities
toward data privacy, where it seems that games such as Pokémon GO and Ingress are being utilised
to gather user information through both geolocative data and computer vision. See David Meyer’s
article (20th July 2016), http://fortune.com/2016/07/20/pokemon-go-germany-privacy/ as well
as Kate Conger (2016), “Niantic responds to senate inquiry into Pokémon GO privacy”, retrieved
from https://techcrunch.com/2016/09/01/niantic-responds-to-senate-inquiry-into-pokemon-
go-privacy.
11 Full article here: https://www.artefactgroup.com/articles/mixed-reality-without-rose-colored-
glasses/. Ironically, while claiming to be the opposite, the article is utterly rose-coloured.
12 Retrieved from https://www.magicleap.com (accessed 2 August 2018).
13 Retrieved from http://blog.leapmotion.com/northstar/ (accessed 1 August 2018).
14 The resistance that many potential buyers have toward HUD wearables is manifest in the sharp
decline of the much-heralded technology Google Glass. While such issues are outside the scope
of this research, the examples described underscore the point that the informatic overlay should
not be understood in a pure technical sense: wearables are culturally and socially entangled,
implicated in practices that extend beyond an informatic context. The commercial mediatic
assemblages of Silicon Valley, driven by neo-liberal profit, frequently fail to interrogate the
questionable practices that such wearables generate, occluding the relevance of the unexpected
behaviours that co-emerge with these apparatuses. For example, hoping that the tide of dissent
against Glass may have ebbed, Google is relaunching the product, this time with AI capacity.
https://www.wired.com/story/google-glass-is-backnow-with-artificial-intelligence/ (accessed 20
September 2018).
15 They note: ‘What is new about new media comes from the particular ways in which they refashion
older media and the ways in which older media refashion themselves to answer the challenges of
new media’ (Bolter and Grusin 1999:15).
16 Bolter and Gromala attempt to distinguish between the window and the mirror, where
VR is seen as following the Renaissance paradigm of the ‘window,’ attempting to provide the viewer
with a seamless perspectival experience, and AR is ‘reflective’, in their text, a more contemporary
metaphor (128). However, as AR/MR practice has played out, both window and mirror are
in fact tropes that structure the informatic overlay. This is especially apparent when analyzing AR on
smartphones and tablets where the screen itself is a pervasive window.
17 Unfortunately, while the book was published in 2017, all of the material from the experimental art
chapters – the area of interest to my study – was created before 2012, and additionally repeats
artwork discussions contained in Vladimir Geroimenko’s (2014) text Augmented Reality Art. For
example, EGG AR: Things We Have Lost, by John Craig Freeman, a range of AR interventions by
Vuforia and Jay Bolter discussing Argon, while others triggered artwork documentation such as
BC Bierman’s Miami Wynwood Walls AR graffiti mural project (Morey and Tinnell 2017).
19 Aurasma was a proprietary application for smartphones and tablets, used to create augments
(known as ‘auras’) and attach these to physical objects using image markers.
https://www.aurasma.com.
20 Presented at the Eyebeam Art and Technology Center in New York (February 21-March 2, 2013)
Gimme More: Is Augmented Reality the Next Medium? was directed by Nicolas Henchoz.
21 Retrieved from https://www.eyebeam.org/events/gimme-more-is-augmented-reality-the-next-
around-you/. Mixed media artwork performed in various locations in London, U.K. Premiered at
the Institute of Contemporary Arts in London in June 2003.
24 See project description retrieved from https://www.blasttheory.co.uk/projects/flypad/. Site
specific AR artwork designed for the Public Gallery, West Bromich, England, 2009.
25 Created for Glow, Santa Monica Beach, Santa Monica, United States, 2010.
26 Technical information from project description retrieved from http://www.lozano-
Mark Skwarek, Sander Veenhof, Tamiko Thiel, Will Pappenheimer, John Craig Freeman,
Christopher Manzione, Geoffrey Alan Rhodes, and John Cleater.
31 Veenhof and Skwarek invited artists Patrick Lichty, John Craig Freeman, Tamiko Thiel, Will
Pappenheimer, and Christopher Manzione to create augments that were then geo-located in
MoMA using the application Layar. Augments remain live in the museum. Documentation and
project link here. http://www.markskwarek.com/We_AR_in_MoMA.html
32 See https://www.layar.com. (accessed 21 April 2013).
33 Thiel, Tamiko, & Will Pappenheimer. 2013–ongoing. This artwork was first staged
at FACT Gallery (Liverpool) then subsequent major iterations at ISEA2014 (Dubai) and
Virtuale Festival (Switzerland). http://www.biomerskelters.com/ (accessed 18 April 2014).
34 Cardiff, Janet & George Bures Miller. 2014. The City of Forking Paths, Augmented Reality app
achieve a relaxed heartrate, which then triggers the propagating process. Competing with another
team to propagate the most plants, physiologically-driven data etches pathways into the city across
a physiologically-driven data network.
36 Affective computing has long experimented with limited aspects of embodiment: Thiel and
Pappenheimer received their technical guidance from scientists researching affective computation
at John Moores University. The collaboration between John Moores University and Thiel and Pappenheimer
was organised by FACT Gallery, Liverpool. https://www.ljmu.ac.uk/research/impact-
achievements/bio-sensing-meets-art. (accessed 16 April 2015).
CHAPTER 2
While the placement of a digital overlay in screen space might technically define MR
from an engineering or computer science point of view, the technical capacity of a
medium need not define the practices developed by other fields, such as media art.
Technical devices can always be utilised in multiple ways, and methods can be
explored that shift the intended uses of hardware and software in directions that
extend or alter the original design. The particular versions of MR produced by this
In this chapter I will examine, via Barad’s concepts of intra-action and apparatus, the
shifting relations that entangle the key material elements in the software assemblages.
This will also challenge the dominance of largely visual methods for
apprehending augmented content, outlined in the previous chapter. Brian Massumi’s
notion of semblance – in which the visual itself is understood as a modality already
Drawing on Deleuze and Guattari, the virtual is a kind of conditioning field (a ‘plane
of consistency’) that ‘opens a rhizomatic realm of possibility effecting the
potentialization of the possible, as opposed to arborescent possibility, which marks a
closure, an impotence’ (1987:190). It cannot be physically accessed as such, although
the virtual is also immanent to any actuality.5 Actualisations of the virtual emerge
Entangled, the potentials drawn out of strata by the machinic assemblage coalesce on
the plane of consistency, where they generate affects between modes of matter in
movement. Andrew Murphie undertakes a reading of the virtual from a Deleuzo-
Guattarian perspective that questions the conventional positioning of VR as a zone for
the enactment of mimesis and digital simulation (Murphie 2002). He argues that VR
technologies are accompanied by a shift from an interest in representational spaces to
questions of operation. Murphie argues that if VR is taken as an opportunity to
generate ‘co-extensive’ connections between the virtual and human perception
(Murphie 2002:8), it has the capacity to offer relations with nonhuman forces and
intensities that might otherwise be imperceptible.
Leap Motion – anchors itself to the physical hand in an apparently seamless flow.
However, this can be ‘unjoined’ and made to slip out of synchronicity through
disruptive gestural techniques, and by exploiting technical idiosyncrasies such as
‘signal inertia’ (discussed in chapter 3). Causing the hardware interface to track
differently, through my performative and choreographic interventions, generates a
new relational diagram that breaks open the original interface design: this not only
shifts the materialisations generated, but also shifts the intended interface design by
manifesting a mutant use case.
In the previous chapter it was argued that the materialist approach taken by the
software assemblage would allow a more performative version of MR to emerge
outside of the taxonomic structuring of the RV Continuum. To facilitate a re-worked
MR, this research utilises a technique where augments and corporeal gesture generate
a field that is intra-active. Technically, the design operates through the Vuforia AR
extension within the Unity SDK, with the Leap Motion gestural controller used to track
the human hand. This networked configuration de-privileges the frame of the screen,
since the AR camera does not seek objects to surround with a symmetrical
pictorial frame (as in image-based tracking), nor does it use
computer vision to identify a fixed layout of data points such as a QR code or other
physical marker. Instead, digital augments/hand avatars are tracked to the gestures of
a physical hand, whose fluid corporeal movements are transposed to the digital space
of Unity through the infrared sensors of the Leap Motion: the infrared signal will take
on a greater importance in chapters 3 and 4, during the Tactile Signal performances
(2018), and in my final performance piece, Contact Zone (2018). The digital
augments/hand avatars are made visible in the Unity SDK by the Vuforia AR
extension. Later in this chapter, I will bring Barad’s work on apparatuses, diffraction
and material-discursive practices to bear on the software assemblages Tactile Light
and Tactile Sound.6 Before turning to a more detailed exploration of these
instantiations of the software assemblage, I want to bring my research into contact
with an artwork by Golan Levin, Chris Sugrue and Kyle MacDonald, that likewise
suggests how corporeal and augmented hands might function intra-actively, as
relational emergences.
Approaching a screen, a gallery visitor places their hand in a small black box.
Confronted with a screen image of a virtual hand that confuses the indexical image of
their physical hand, they must perceptually wrestle with an unfolding relational mixing
of ‘spaces’. Participants are faced with a series of digital permutations that seem to defy
the actual physical state of their hand. The Augmented Hand Series (Levin, Sugrue,
MacDonald 2014) re-constructs, in real time, a participant’s hand, adding or
subtracting, warping and variously skewing the familiar territory of one’s own body.
Once in the box, the hand is tracked by a custom-designed AR system that uses the
Leap Motion interface to provide gestural data. Real time tracking allows a tight
connection between the original physical and adapted digital hand, with participants
able to articulate all fingers, even when there are six or four in the screen space.
From the responses at the Cinekid Festival, where the work was first shown, we can say
that perception is abruptly challenged, and that the unexpected experiences
provided by the new modelling of hands in digital space problematize participants’
acts of recognition.7 Conjoining mutated
hand models with the unique hands of various participants, produces a shock to
proprioception, and a desire to unlock the secrets that this strange machine offers to
the usual eye-brain-hand configuration. The challenges to perception posed by this
augmentation place pressure on people’s capacity to perceptually process such
unexpected intra-active becomings as they are occurring, even though after the fact
they are clearly aware of the computationally generated disruption. Through these
astounding intra-actions, new experiences of the body emerge not only in digital space,
but through an adapted perception in physical space as well. With Barad, we might say
that both apparatus and the body (as corporeal and digital) are entangled, intra-
actively co-constituted in the same moment: when a participant places their hand in
the box, they become entangled in a material-discursive practice that actually re-
draws the boundaries of their own embodiment. The artists are establishing the
conditions for the participants to feel new modes of embodiment, where perceptions
of the body as a discrete and bounded entity are challenged:
Can real time alterations of the hand’s appearance bring about a new perception
of the body as a plastic, variable, unstable medium? … . the Augmented Hand
The Augmented Hand Series not only articulates a myriad of techniques for
transposing hand gestures into digital space, but further presents a situation where
the participant can experience a dislocation from the habitual perception of their own
body. Emerging in flashes of striking reconfiguration, these moments are not mere
entertainment. Rather, they explore the potential of the body to shape itself differently,
to articulate alongside digital adaptations. Arriving with their ordinary visual and
proprioceptive experiences of their hand, participants leave with an embodied
memory of a digitally adjusted corporeality. The device that allows the Augmented
Hand Series to generate these apparently seamless re-configurations is the Leap
Motion gestural interface. In the next section, I explore how the Leap Motion, when
coupled in different ways to other technical and corporeal aspects and gestures in the
software assemblage, might assist in further exploring MR.
The Leap Motion is a proprietary gestural interface that tracks hand position three-
dimensionally in physical space to enable augmented representation of this
movement in AR/MR/VR. A small rectangular hardware device, it houses two infrared
cameras and three infrared LEDs that capture images from the near-infrared band of the
electromagnetic spectrum, a capacity that will be explored more closely in chapter 3. As a hand is raised above the
Leap Motion’s infrared sensors, an image stream is captured and relayed to the Leap
Motion software: this signal is then passed to a Software Development Kit (SDK) — in
my case the Unity SDK. Using infrared to trace an outline of the physical hand’s
gestures and obtain movement data, the Leap Motion’s software turns the infrared
signal/image stream into a map of the human hand, which provides the control point
for any subsequent interactions with a screen space.
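The tracking pipeline just described (infrared image stream, hand map, control point passed on to an SDK) can be sketched schematically. The following Python is an illustrative toy, not the Leap Motion SDK: the function names, the simple thresholding, and the centroid reduction are my own assumptions, standing in for the device’s far more sophisticated processing.

```python
# Schematic sketch of the pipeline described above (hypothetical names,
# not the Leap Motion SDK): infrared frame -> hand map -> control point.

def extract_hand_map(ir_frame, threshold=0.5):
    """Reduce a 2D infrared intensity frame to the pixel coordinates of
    the bright (hand) region: a crude stand-in for the outline tracing
    performed by the Leap Motion software."""
    return [(x, y)
            for y, row in enumerate(ir_frame)
            for x, value in enumerate(row)
            if value > threshold]

def control_point(hand_pixels):
    """Collapse the hand map to a single control point (its centroid),
    analogous to a palm position handed on to an SDK such as Unity."""
    n = len(hand_pixels)
    return (sum(x for x, _ in hand_pixels) / n,
            sum(y for _, y in hand_pixels) / n)

# A toy 4x4 'infrared frame': the bright 2x2 block stands in for a hand.
frame = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
print(control_point(extract_hand_map(frame)))  # -> (1.5, 1.5)
```

The point of the sketch is simply that each stage discards information: a rich image stream is reduced to a region, and a region to a single coordinate that then drives all subsequent screen-space interaction.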
The Leap Motion began its working life as a desktop interface, and in recent years a
Head Mounted option that attaches the gestural controller to a range of commercially
In ordinary use, one does not touch the Leap Motion. Whether mounted on the
desktop or an HMD, the device is not intended to be in tactile contact with human
hands. This is because tracking is only possible under certain restricted conditions,
such as placing the interface on a flat surface (Guna et al. 2014). To enable the Leap
Motion to become part of an experimental software assemblage in which intra-action
was enabled, a change in its relation with the performing body was necessary: a
change from a computer vision-oriented use, achieved primarily through the capture
of infrared electromagnetic signal, to a performativity that also included tactility.
In the software assemblages that follow, the Leap Motion interface is handheld rather
than desktop mounted. When handheld, embracing atypical gestures – such as
will be shown shortly in Tactile Light and Tactile Sound – the device is in contact with
the corporeal body. Hence, it is affected by the modulations and rhythms of gesture,
rather than being only optically oriented toward its intended purpose of tracking the hand.
Holding positions shift the data as it is captured, since distance, position and
orientation of the hand are fluid, loose and mobile. Taking inspiration from avant-
garde choreography by Merce Cunningham and Yvonne Rainer, my hands will enact
micro-gestural movement techniques, described in more detail later in
this chapter. These techniques differ enormously from the ninety-degree angle that
ordinarily positions the Leap Motion adjacent to the hand in desktop use. Using this
device handheld provides a more extensive range of gestural possibilities that
enhance the effect of the moving corporeal body, involving the interface in a range of
dynamic movements.
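The observation that holding positions shift the data as it is captured can be made concrete with a minimal geometric sketch: a fixed physical hand position is reported differently as the handheld device tilts, because tracked coordinates are relative to the device’s own frame of reference. This is an illustration under assumed names, not the device’s actual firmware.

```python
import math

def reported_position(hand_world_xy, device_tilt_radians):
    """Hypothetical illustration: rotate a fixed world point into the
    (tilted) device frame, showing how the same physical hand yields
    different coordinates as the handheld device moves."""
    x, y = hand_world_xy
    c, s = math.cos(device_tilt_radians), math.sin(device_tilt_radians)
    return (c * x + s * y, -s * x + c * y)

hand = (0.0, 100.0)  # a fixed point 100 mm above the device's origin
flat = reported_position(hand, 0.0)                # desktop-style, flat
tilted = reported_position(hand, math.radians(30)) # handheld, tilted
print(flat)    # (0.0, 100.0)
print(tilted)  # roughly (50.0, 86.6): same hand, different data
```

In desktop use the tilt term is effectively zero and constant; held in a moving hand, it becomes a continuously varying parameter, which is precisely what opens the captured data to gestural modulation.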
The software assemblages to follow in this research speculate upon the notion that
deploying creative techniques for using commercially available devices in alternative
ways opens a space for more indeterminate corporeal relations to arise. Here,
irregularities and serendipities of gestural movements between bodies and interfaces,
challenge preconceived assumptions carried over from HCI approaches to design that
focus on affording control of a data system to a human user. In media art practice, the
computational alignment of the corporeal body with the micro-temporal accuracy of
machines may not be desirable in all cases. I take this question up further in the
following section by looking at gaming and its interfaces on the PlayStation console.
The Leap Motion’s marketing tagline ‘truly immersive VR begins with your hands’
sells a perception of desktop VR that fully incorporates the ‘real world’ movement of
hands. When hands are the interface itself — as with a touchscreen or a gestural
interface rather than a peripheral device like a joystick or mouse — HCI research
understands that this results in more natural interaction, since perceptual as well as
motor skills are leveraged to afford control (Hutchins et al. 1985; Thompson 2015).
The popular idea in HCI is that an interface should fade ‘into the background despite
its material presence’ (Jager and Kim 2008:45). However, if we deny the material
presence of the interface itself, then we are also assuming that the work that the
interface does is similarly ambient. In the case of the Leap Motion, the work this
interface does is to transpose the physical body to digital space, via a series of
computational operations that aim to mirror the corporeal in the technological virtual.
The assumption is that the body can be re-constituted with accuracy and precision in
digital space-time. Yet, this re-constitution involves a series of mathematically
calculated translations that omit many of the body’s actual movements and gestures.
It involves a transposition from the fleshy matter of the corporeal body to the clean
matter of pixels and voxels, which is, at best, an approximation. The extent to
which such approximations shift the data as it is transferred and converted, and how
this process intra-actively alters relations with the original physical gestures that are
materialised as digital information, requires further examination.
With the Leap Motion, the body is aligned with a data system through a touch-less
interface, so that digital matter might be visibly controlled. The wavelike gestures
leveraged by the ‘natural’ hand interaction become the ‘magical’ portal where digital
matter might be encountered as a mysterious ‘virtual’ other. In the industrial design
of 3D motion interfaces like Leap Motion, there seems to be a persistent idea of
controlling the invisible to invoke material effects. To explore the Leap Motion from a
performative and materialist perspective, we need to re-frame the assumptions of data
transmutation to reveal the goals of control. Further, we must acknowledge that the
translation of data from the body – such as hand tracking coordinates – is only ever
a partial re-construction of corporeality. Data of all types will inevitably be elided,
while other information (such as positional tracking or hand orientation coordinates)
is privileged.
The way that the Leap Motion re-constitutes corporeal data as digital information
involves the use of its infrared sensors to distinguish a hand outline – using
electromagnetic radiation not visible to the human eye. Through calls made to the
Unity SDK, the body’s gestures are matched to numeric values, events which are in
turn mapped to data objects (which appear on screen as digital hand avatars). Numeric
values, chosen from a limited set of variables, are approximated vectors of the physical
hand’s position in digital space: they are not the actual continuity of gestural phases
the body goes through as it moves in a physical environment.
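The distinction drawn here, between approximated vectors and the actual continuity of gestural phases, can be sketched as follows: a continuous gesture path is reduced to a finite list of sampled vectors, and everything between samples is elided. The function names and frame rates are illustrative assumptions, not the Leap Motion’s actual sampling scheme.

```python
import math

def continuous_gesture(t):
    """A smooth, continuous hand path: a mathematical stand-in for the
    unbroken phases of corporeal movement."""
    return (math.cos(t), math.sin(t))

def sampled_vectors(duration, frame_rate):
    """Approximated vectors of the hand's position, as a tracking frame
    rate would capture them: a finite list of discrete samples, not the
    continuity of gestural phases the body actually moves through."""
    n = int(duration * frame_rate)
    return [continuous_gesture(i / frame_rate) for i in range(n)]

# One second of movement at two illustrative frame rates: however high
# the rate, the movement between samples is discarded.
coarse = sampled_vectors(1.0, 10)
fine = sampled_vectors(1.0, 120)
print(len(coarse), len(fine))  # 10 120
```

A higher frame rate refines the approximation but never abolishes it; the digital hand avatar is always reconstructed from a finite set of chosen values.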
Yet, following Barad, this technical operation is not merely a method for measuring
and converting data. The notion of an apparatus as agential is a cornerstone of Barad’s
philosophy, and a direct challenge to conventions from classical science that place the
scientist at the centre of observation. Her critical posthumanist position gives
credence to nonhuman modes of matter as agential, where material arrangements of
apparatuses have the capacity to alter the materiality of bodies by materialising those
bodies differently. For example, in a discussion of the piezo transducer as a measuring
instrument to determine gender in ultrasonography, Barad shows that this device
enacts boundary-making practices toward the human foetus in utero, as the object of
measurement (Barad 2007:189-191). She argues that an instrument (such as the piezo
transducer), when coupled to an array of techno-scientific practices, produces a
material shift that alters meaning with each technical arrangement. The apparatus
occupies a dynamic nexus:
Importantly, apparatuses are not external forces that operate on bodies from
the outside; rather, apparatuses are material-discursive practices that are
inextricable from the bodies that are produced and through which power works
its productive affects. Apparatuses are phenomena, material configurations/re-
configurings, that are produced and re-worked through a dynamic of iterative
intra-activity (Barad 2007:230).
Barad takes a broader view of apparatuses outside of the instrumental approach often
associated with scientific fields. She understands apparatuses as ‘produced and re-
worked’ dynamically. Developing new performative techniques to conjoin the Leap
Motion to my experimental software assemblages injects a dynamic interplay into
digital augments. As well, it allows for the body – ‘inextricable’ from the apparatus in
the moment of interfacing – to iteratively contribute to further modulations.
Before exploring this approach further, we need to inquire more closely into the logic of
digital control manifest in devices such as the Leap Motion. This is most apparent in
the field of gaming, where Adrian MacKenzie (2002) has noted that the act of using a
gaming controller aligns the body with a data system:
Rather than converting structures into events, the real time animated computer
game seems to assimilate events to pre-existing structures, to select amongst
the possible events only those that can be processed in terms of translations of
polygons around the screen. Rather than real time play triggering events, the
very systems on which it relies seem to contain events within a strictly
controlled permutation of marks. There would be good grounds to argue that
there is no play here, or at any other ‘playstation’ (2002:159).
In juxtaposing the ludic potential of converting structures to events, against the actual
relation (in the PlayStation system) of assimilating ‘events to pre-existing structures’,
MacKenzie explicates the restricted way in which the corporeal body, treated as a
structure itself, is brought into alignment with the game world. Play is constituted as
While the act of aligning the body with hardware devices is a feature of all
commercially available controllers, including the Leap Motion, my research argues for
the need to creatively shift the body’s relation with controllers beyond such an
alignment. Artworks of the kind this research generates, and those which provide
dynamic contextual examples of experimental practice in this field of MR, are not
closed systems that require completion, or that lead the participant toward specific
goals. Through a conception of the apparatus as a phenomenon that — partially and
selectively — re-constitutes corporeal flows as a more performative, dynamic ‘digital’
matter, it will be possible to further disrupt the conception of a seamless alignment
between body and computational machine.
The seminal artwork Loops (Kaiser, Eshkar, Cunningham, Downie 2001) highlights
some of the issues that trouble the idea that corporeality can be wholly transposed to
data space. Loops is an interactive artwork that began its life in 2001 and has had
several iterations. It takes motion capture data from a solo dance performance for the
hand by Merce Cunningham, as well as his recitation from a diary entry of a visit to
New York as a young man. It couples these with generative algorithms to create real
time choreographic and sonic adaptations of Cunningham’s hand ‘dance’ data. When
opened to the realm of the digital, the precision of Cunningham’s movement is motion-
captured as data and then encoded relationally using AI algorithms. Stamatia
Portanova (2013) has commented on the programmatic complexity of the physical and
digital interrelations of Loops, stating that:
The primary interest of the choreography therefore is that it situates the whole
dance where it should be and in the way it should be, that is, in Cunningham’s
body as one, unique realization in the execution of a program. … When
performing Loops, Cunningham himself became, in short, an eternal object, or
an abstract idea (Portanova 2013:102).
For Portanova, the algorithms that mutate Cunningham’s physical hand data, allow
for a playing out of this data as an ‘eternal object’ in computational space. Portanova
brings an interpretation to Loops in which she reworks the physical/digital duality as
instead a modulatory composition. There is a risk here, however, that her conception
of (mathematical) abstraction might see Cunningham’s corporeal movements give way
to yet another kind of disembodiment. Hence, we need to further inquire as to the
actual relations at work between abstraction and the body as they emerge and
reconfigure through artworks such as Loops.
Anna Munster has articulated the digital materialization of the data captured from
Cunningham’s hand as an iteration that varies: enacted algorithmically, an oscillating
series of temporary formations of the body in digital space capitalise on their
proximity to one another and on sonic rhythms (Munster 2006:179-80).
Cunningham’s dance is re-composed as a digital embodiment. However, rather than
considering code here as abstract (as Portanova does), Munster sees Loops as
deploying a sophisticated technological art in which code and corporeality are mutably
engaged. Such mutability resists categorisation as either disembodied or wholly re-
embodied in the data:
What is needed is an approach to the body that engages modulation as its operating
principle, inflecting the iterative movement of code as it ‘captures’ the mutable
corporeal body, and the ways bodies leave their ‘marks’ on code. Bringing this
understanding of the mutable engagement between the corporeal and the
The performance of Loops did not involve the continuing physical co-presence of
Cunningham: instead it captured Cunningham’s motion data and algorithmically
structured the dance from that. Since the presence of my performing body will be a
continuing refrain in the software assemblages in this research, I will briefly turn to a
practice-based investigation of choreography and AR that unfolds alongside a co-
present performer. Transmedia performance works incorporating AR techniques with
co-present human performers, such as the Crack-Up (2013),10 deploy motion capture
methods to virtualize the movements of dancers. Those movements are transposed to
‘performing agents’, via custom-designed software by John McCormick. This process
illustrates a way physical movements can be extracted to the digital and then
reincorporated to the physical again through a relationally vested alignment with a
dancer during the performance.
The question of how objects might emerge out of a relational event is addressed by
Brian Massumi through his explication of ‘semblance’ (Massumi 2011:18). We might
also apprehend this through the emergence of digital objects in a computational
network, such as those generated by my software assemblages. Drawing upon Susanne
Langer and Walter Benjamin’s notions of semblance, Massumi argues that semblances
are not simply static forms, existing on a page or wall devoid of vitality. For example,
in a motif of a leaf, we may perceive its leaf-ness not simply through a set of lines that
suggest the whole form, but in ‘the object’s relation to the flow not of action but of life
itself, its dynamic unfolding, the fact that it is always passing through its own potential’
(Massumi 2011:50). Massumi sees semblance as gifting a ‘vitality affect’ to abstracted
forms, a liveness that is never lost. Semblances are (vital) forms, encapsulating
affective forces in crystallised suspension. When we perceive a semblance, however,
we are not apprehending the empirically real, but a vitality affect that is immanent.
Semblances are virtual because the movement (or force) immanent to them does not
yet actualise. Massumi articulates semblance as the pure abstraction of movement:
suspended in the semblance, the vitality of the movement yet to come is nonetheless
still present. Massumi notes:
the “likeness” of an object to itself ... makes each singular encounter with it teem
with a belonging to others of its kind (the object as “semblance”) (2011:243).
In the context of the software assemblage, semblances are part of the field of co-
composition where, for example, gestural recursions can be used as a proposition
for re-assembly. Bringing Massumi’s proposition for the processual tension between
semblance and actualisation into the software assemblage, suggests a useful technique
for operating with relational shifts as they unfold in events. In the case of my
performances, folding semblance back into the MR space activates gestures without a
finite end, gestures that are incomplete and that move without being oriented to goals.
Such gestures allow relays of data and physicality to fold into one another, as they re-
assemble and relationally inflect. This understanding also resonates with Deleuze and
Guattari’s plane of consistency.
Arising through the specific conditions posited by any given assemblage, ‘lines of
flight’ might also be thought of as material relations drawn by movements, that are
composed through the flow of matter as agential. Highly potentialized, ‘becomings’
coalesce in unique arrangements that entangle matter – and semblances – in complex
meshworks of desiring production. The emergent relational arrangements proposed
by re-incorporating semblances into a moving flow of data as choreographic elements
will be explored shortly in Tactile Light and Tactile Sound; however, the same
conception resonates throughout the software assemblages in this research. Hand
avatar forms will emerge to intra-actively co-compose in the relational space between
my physical hand holding the Leap Motion interface, a computation system built in
the Unity SDK, and a living projection screen/tactile surface composed of wheatgrass
plants. Critically, these assemblages will explore the idea that, when arranged in a
relational field of movement, the digital hand avatars could be considered as
choreographic elements, unfolding relationally across that field.
Cunningham’s improvisations with hand and gesture have certainly influenced my own
performative techniques. Yet, of significance for my micro-gestural techniques is a
frequently neglected film by Cunningham’s talented former student, Yvonne Rainer,11
whose Hand Movie (1966) stands out as an early exploration of hand choreography on
film. Rainer was in hospital and unable to dance, though her hands remained capable of
movement. The resultant 8-minute film, shot on 8mm black and white film by
cinematographer William Davis at her bedside, is a unique example of the ways in
which constraints upon the body’s normal operation can be powerfully redirected into
new physical formings. Rainer’s temporary disability produced an entirely new
choreographic proposition, one that shifted the expected relation with traditional
concepts of dance, such as the premise that the whole body must be involved
(Lambert-Beatty 2008:178).
Within a fixed frame is a sole compositional element, Rainer’s right hand, situated in
front of a sterile white wall. As the dance begins, fingers twitch away from the palm,
then move to find one another, tentatively at first, then more boldly. Tension is applied
by each element to the next. As the fingers bunch together only to flick apart, we
apprehend a hand exceeding its regular actions, without goal or aim, except to explore its own
operability as a relational set. At times, it appears that the hand could be a group of
dancers, tumbling over one another in curves and lines. In a choreographic turn that
advances a critique of dance conventions, movements are executed without a pre-
determined, overarching structure.
It is worth pointing out that the choreographic approach gleaned from Rainer
produces an entirely different set of hand gestures than those normally associated with
the Leap Motion, apparent in the software assemblages throughout this research. The
rotations, twists, leans, pinches and stretches of the fingers and palm, performed by
my hand, go far beyond the usual stable of goal-oriented gestures that operate with
interfaces like the Leap Motion. The nuances of such movements have found their way
into Tactile Light and Tactile Sound, but are especially amplified in the Wild Versions
(chapter 3), whose computational modules via the Unity SDK were re-designed to
amplify micro-gesture. Through the software assemblages discussed below, we see
what open-ended possibilities might emerge through an approach that considers the
nuances generated through micro-gestures as a means of inflecting performativity to
an interface such as the Leap Motion.
Fig. 16. Tactile Light. Micro-gestures diffract across plants. Image: Simon Howden.
Fig. 17. Hand avatars blend with environment. Image: the artist
Fig. 18. Environment view. Hand avatars blend with wheatgrass structures. Image: the artist.
However, the digital augments were not figured as single hands that emerge as exact
replicas of their corporeal counterparts. Instead, the hand avatars emerged on the
wheatgrass screen at scales that were either much larger or much smaller than ‘life’
(Figures 18 and 19). With scale manipulated and the angles of the hand models
skewed, each hand was then displayed across the wheatgrass screen, conjoined in a
visceral architecture with the next. In this way, the data of a single hand moving was
translated into a multitude of augments, with the results in the projected image
sometimes resembling hand avatars (Figure 19) and at others pushed to
abstraction by the diffraction of projector light on the wheatgrass itself (Figure 20).
Fig. 21. Grass lattice alongside my performative gestures. Image: the artist.
phenomena that is optically apprehended by the human eye. For Barad, the patterns
of interference that diffraction makes as it moves through a medium are marks of the
intra-active process enacted by one type of matter moving through another. In this
way, diffraction differs from other kinds of optical paradigms such as reflection, where
matter and its optical effects are held at a distance; for example, the metaphor of
‘holding up a mirror’ suggests already distant relations between subject and object.
Exploring Barad’s insight regarding the diffractions of matter, offers a perspective on
the co-emergence of augments, corporeal bodies and plant multiplicities in my
software assemblage version of MR, where an apparatus is a phenomenon that not
only produces intra-actions, but is an integral part of ‘the ongoing intra-activity of the
world’ (2007:73). As it passed over the living screen, light formed diffraction patterns
between my physical movements and the digitally generated hand avatars, as they
moved through each other across the wheatgrass surface. Because of the texture of the
wheatgrass itself, it was difficult to pick out the individual 3D hand models. Rather,
what was apprehended was a field of moving hands on a surface that was also field-
like. The human hand augments – emerging relationally rather than as individual 3D
hand models – were now only partially recognisable as physical hands. Projecting onto
wheatgrass diminishes digital fidelity and graphic accuracy, and prioritises the
movements of intra-active phenomena. As the light from the projection coalesced
across the wheatgrass screen, forms modulated in transient outlines across its surface.
Phenomena were liminal in the luminous glow of projection, in a scramble of layers
that blurred and muddled recursively. For the performer, relational registrations were
happening all around, in the flow of the event, with the screen only catching a small
portion of the affective vitality of the system. A habitual familiarity with my physical
hand was undercut by the emergence of unusual material-discursive structures. This
highlighted an experience of the body in the act of temporarily re-configuring itself
through recursive relays of semblances. Observed on the grass screen from visually
obtuse perspectives, the hand as a discrete entity – either physical or digital – is
problematized.
In Tactile Sound, I once again spent six weeks growing a wheatgrass sheet. This time
I placed it on the ground – rather than hanging it, as in the previous piece – and added
two piezo sensors at the base of the wheatgrass expanse. In the performance, I sit in
the wheatgrass and activate the piezo discs by mechanical pressure (Figure 22). Their
analogue signals were captured in real time using the software Logic Pro X14, and
delay and feedback were added as I used hand gestures to work with the sound that
emerged. The mechanical pressure of my hand on the surface of the grass generated
the initial analogue signal; however, as the convolved digital signal reached my ears
from Logic, I needed to craft a response in the moment: the intra-actions between my
hand, grass and data coalesced into a relay that I had little control over. Performing
an array of gestures (such as swishes, shaking, flicking), my hand moved the analogue
audio signal around, as I responded to the processed output and generated new
analogue inputs (Figures 23, 24 and 25).
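Although none of the Logic Pro X processing chain is reproduced in this thesis, the delay-and-feedback treatment described above can be illustrated by a simple feedback delay line. The following is a hypothetical Python sketch, not the patch used in performance; function and parameter names are invented for the example:

```python
def delay_with_feedback(signal, delay_samples, feedback=0.5, mix=0.5):
    """Apply a simple feedback delay line to a mono signal.

    Each output sample blends the dry input with a delayed copy;
    the delayed copy is fed back into the delay buffer, producing
    the recursive echoes worked with gesturally in the performance.
    """
    buffer = [0.0] * delay_samples  # circular delay buffer
    out = []
    idx = 0
    for x in signal:
        delayed = buffer[idx]
        buffer[idx] = x + delayed * feedback  # feed the echo back in
        idx = (idx + 1) % delay_samples
        out.append(x * (1.0 - mix) + delayed * mix)
    return out

# A single impulse produces a decaying train of echoes.
echoes = delay_with_feedback([1.0] + [0.0] * 9, delay_samples=3,
                             feedback=0.5, mix=0.5)
```

Here each repeat arrives `delay_samples` later at half the previous amplitude, the kind of recursive relay that, in the performance, exceeded direct control.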
Fig. 22. Sitting on the wheatgrass sheet, activating piezo sensor and Leap Motion in tandem.
Image: Simon Howden. Full documentation available: https://rewawright.com/2017/05/02/live-
performance-tactilesound-jan2017/
Figs. 23 and 24. Swishing the grass. Movement left to right. Image: Simon Howden.
Fig. 25. MR screen capture from Unity of hand avatars. Image: the artist.
of data through an apparatus – a point that relates to my lack of control over the
analogue signal coming out of the piezo discs described above in regard to Tactile
Sound. Here, it must be pointed out that Barad has completely re-worked the concept
of causality. Hers is not the causality of classical science. Rather, she has opened
causality to a fluid quantum understanding that validates an unpredictable dynamics
for all bodies or materialities (Barad 1996:172). ‘Scientist’, researcher or artistic
performer must all be included as a phenomenon in part generated by an apparatus
(2007:32), as much as an apparatus itself is, integrally, a phenomenon that is inflected
by these subjectivations (2007:247). Atomic and sub-atomic phenomena that
constitute the ‘flows’ of matter are not simply objects of study, as in Newtonian physics
(Barad 1996:169), but dynamic agents in the production of knowledge. Applying this
observation to digital augments, we might say that the dynamic intra-actions that
materialise human and nonhuman phenomena in Tactile Light and Tactile Sound
emerge through differentiated flows of matter, as matter diffracts through matter.
For example, an instrument such as the Leap Motion – when configured in an art
assemblage – materialises data differently than in a commercial/industrial
configuration. Engaged with unpredictable forces (such as sonic signals, projection
light) and motivated by unusual hand gestures that enhance its tactile potential, the
Leap Motion is re-purposed through performative techniques and through a
diffractive approach. Diffraction here functions in several ways: materially – for
example through the projection light hitting the wheatgrass surface in Tactile Light,
or the sonic signal passed through the wheatgrass, converted in Logic Pro X, and
further modulated with my hands in Tactile Sound15; and metaphorically – as an
antidote to the prevalent assumption (discussed in chapter 1) that digital space is a
reflective domain that should capture and wholly re-compose the physical world.
In the MR software assemblages I create, digital augments are not static objects with
a pre-programmed range of movements. This approach is made possible by using
hardware devices such as the Leap Motion as a dynamic threshold to achieve a more
performative mode of interfacing – not simply as an instrument that allows different
types of data to make technical connections. As I will explicate in the course of this
research, this materialist understanding of performative interfacing operates to lure
the digital augments into different kinds of relational movements with corporeal
bodies and plant multiplicities. The two related assemblages described above
underscore the idea that the software assemblage iterates differently in variable
experimental situations, in a technical design that is speciated: both iterations share
code modules, hand controller elements, and the same species of living plants as
surface or screen. The individual elements of these software assemblages were further
speciated through the inclusion of different types of sensor (such as piezo sensors with
the Leap Motion in Tactile Sound), a re-configured wheatgrass sheet as screen (as in
Tactile Light), as well as a shift from projected light (Tactile Light) to LCD display
(Tactile Sound).
This speciation prevented the design of the artwork from solidifying, since shifting
relations within the arrangement encouraged the performer (myself) to resist the
temptation of forming habitual responses with the nonhuman elements in the
apparatus. As relays of data and the corporeal emerge and compose together, the
relations between the physical and the digital world change. These shifts of the
relation itself make possible the co-composing between the digital hand avatar and its
physical companion in the software assemblage ecology. In both Tactile Light and
Tactile Sound, the performer observes and modulates with the recursive relays, as cues
for an improvisational choreography. Considering the field of movement generated by
the hand avatars as an experimental choreographic arrangement opened these
performances to oscillations across the corporeal and the digital space. My performing
body emerged intra-actively as a phenomenon – on and off screen – in a tangle of
provisional states that oscillated with data and the organic living wheatgrass. Meeting
at different scales, magnitudes and thresholds, these performances established that
data, the body, and living plants could be productively investigated in an alternative
approach to MR that would loosen digital augments from their conventional position
as informatic overlays.
Partly, this phenomenon was due to the limits of the Leap Motion interface, which is
designed to only track hands. Moreover, the polygonal models were closed at the
wrist, and this accentuated the effect of the cut. This issue was masked in Tactile
Light via the relationality of the augments, but was viscerally manifest on the LCD
display of Tactile Sound (Figure 26). The issue was not one of simple aesthetics, since
the chopped hand was inhibiting the capacity of the digital avatars to intra-actively re-
configure in emergent co-compositions that might challenge perceptions of isolated
augments and their indexicality. This presented a significant problem for my gestural
approach since it seemed that there was little or no modulation of the semblant forms
of the digital hand avatars with one another, but only of the signal as it was sent
through the embedded piezo sensors, or as it was modulated by my physical gestures.
The severed hand had, literally and figuratively, sliced off any potential for more
complex recursions. It became clear that using a human hand model presented a
limitation. A more fluid motif, open to recursion, was needed. Additionally, the
physical hand in combination with a matching digital avatar could be construed as a
‘presence metaphor,’ one of the facets of Milgram and Kishino’s taxonomy I had earlier
critiqued.
Fig. 26. On the LCD display, the chopped hand in stark relief. Image: the artist.
In chapter 3, I will discuss the impact of choreographic improvisation in the ‘wild’ and
how this led to the complete removal of these figural hand models in favour of a
fusional hybrid augment that blends human corporeal forms with abstracted
representations of plants found in the natural bush ecology that was the setting for the Wild Versions
(2017). There, we will examine how shifting the digital hand avatars away from
Thousand Plateaus (1987), it is also aligned with stratification. This marks a subtle shift from
Deleuze's earlier work on the concept. Bonta and Protevi explain: 'the actual is the aspect
complex systems display when, in a steady state, they are locked into a basin of attraction. Actual,
stratified systems hide the intensive nature of the morphogenetic processes that gave rise to them
... . It is as if the actual were the congealing of the intensive and the burying of the virtual'
(2004:49).
6 Following Susanne Witzgall, it should be noted that assemblage, as articulated by Deleuze and
Guattari, and apparatus, from Barad, are quite similar constructions. Barad’s formulation is
underscored by a conception of matter as subatomic (from quantum physics), giving rise to the
notion of intra-action as the processual relation that generates new quantum and physical
phenomena. Deleuze and Guattari were very interested in complexity theory and the molecular,
and this interest has rubbed off on their conception of the machinic assemblage as articulated in
A Thousand Plateaus.
7 Visitor reactions to Levin et al. Augmented Hand Series from the Cinekid Festival Amsterdam
created by Kim Vincs, John McCormick, Steph Hutchison, and Alison Bennett, amongst other
improvisational techniques that used everyday action rather than classically vested manoeuvres.
Like her teacher Merce Cunningham, Rainer was inspired by chance operations, eschewing the
traditional models of choreography found in classical dance in favour of movements that were
politically charged and experimental. Repetition of processual actions replaces conventional pre-
formatted dances. In Rainer’s improvisation techniques, performers draw on a repertoire of
unfamiliar gestures and movements, rather than repeating a model conceived in advance by the
choreographer (Rainer 1974:87).
12 Rainer describes her choreographic method: ‘Improvisation, in my way of handling it, demands
a constant connection with some thing – object, action, and/or mood – in a situation. The more
connections that are established the easier it is to proceed’ (Rainer 1974:299).
13 In chapters 3 and 4, I explore Haraway’s concept of diffraction in greater detail, as well as
CHAPTER 3
Attending to the relations that oscillate matter with materiality in the software
assemblage has generated artworks that reveal a more performative side to digital
augments than is conventionally manifest in commercial/industrial and computer
science versions of MR. In this chapter, attention will be paid to the potential for
affective relations between nonhuman forces, leaning toward signal modulations
between digital augments, data system, infrared signal, and plants. Embarking on an
analysis that traces entanglements of matter and materials in three post-studio
software assemblages, this chapter will follow my practice of performative interfacing
as it moves from the established indoor situations of Tactile Light and Tactile Sound,
through to less controlled outdoor environments, where wider ecologies will be
utilised as expressive spaces. Working alongside natural ecologies, I extend the ways
in which living plants might become agential forces in the software assemblage: firstly,
in the Wild Versions, a mobile MR kit traces an affective engagement with a natural
ecology; and, secondly, in the two Tactile Signal performances, plants are engaged as
signal producing bodies that I co-compose with.
In the Wild Versions – made in the unpredictable environment of the bush – I explore
the potential of a post-studio approach that nests the software assemblage in a wild
ecology. In the Tactile Signal performances, bio-electrical signals emitted by plants
are incorporated as agential elements that co-compose the sonics of the performance.
In the field of augmented audio, a sonic augment is broadly defined as an artificial
sound added to a more direct source (Cohen, Aoki, Koizumi 1993; Mariette 2013).
Through the Tactile Signal performances, I will be offering an alternative approach
where audio as augmentation is generated through the analogue bio-electrical signals
produced by plants, digitally converted by a computational network. I will be
examining experimental musician Miya Masaoka’s practice of utilising plant bio-
electrical signals in performance via a Body Area Network (BAN), where sonics that
emerge from plants are able to be modulated by the human body. I will be drawing on
Masaoka’s techniques for working with the amplified bio-electrical signals, suggesting
it as a way to extend current approaches to augmented audio in MR.
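The conversion proposed here – an analogue bio-electrical signal, digitally sampled and rendered as audio augmentation – can be illustrated by a simple sonification in which voltage fluctuations are mapped to pitch. This is a hypothetical Python sketch for illustration only: the function names and frequency range are invented assumptions, not Masaoka's method nor the system used in the Tactile Signal performances:

```python
import math

def sonify(bio_samples, f_min=110.0, f_max=880.0):
    """Map slow bio-electrical voltage samples to audible frequencies.

    Normalises each sample into [0, 1], then scales it between f_min
    and f_max, so that fluctuations in the plant signal become pitch
    movement in the resulting audio augment.
    """
    lo, hi = min(bio_samples), max(bio_samples)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat signal
    return [f_min + (v - lo) / span * (f_max - f_min) for v in bio_samples]

def render_tone(freq, duration=0.01, sample_rate=44100):
    """Render one short sine-wave grain at the given frequency."""
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

# Example: three voltage readings become three pitches.
freqs = sonify([0.02, 0.05, 0.08])
```

The design point is that the plant signal is not sampled as a recording to be played back, but as a control stream that continuously inflects the audio – which is what allows it to operate as an agential element in the performance.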
One of the new configurations explored in my work with the software assemblages at
this stage of the research is a head-mounted permutation of the Leap Motion interface
where we actually look through its infrared camera sensors. In this configuration,
digital augments passed from the Unity SDK are composited in real time on the
distorted, grayscale picture plane generated by the infrared camera’s image stream:
instead of the coloured pixels of the video stream, we now see a heat-mapped
rendering of the physical world. The significance of this interface permutation for my
research proposition is that the infrared camera view is a stark contrast to the colour
video images provided by the webcam stream, used exclusively until now in my
previous software assemblages. It affords a markedly different view of the subjects
framed in the display window and will be used to question the need for a clear
informatic window which, as we saw in chapter 1, is a standard AR/MR visual practice.
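Compositing augments over the infrared image stream can be illustrated by a minimal alpha-blend of a coloured layer over a grayscale frame. This is a hypothetical Python sketch of the general operation, not the Unity compositing pipeline actually used; all names are invented for the example:

```python
def composite_over_ir(ir_frame, augment, alpha):
    """Alpha-composite a coloured augment layer over a grayscale IR frame.

    ir_frame: list of grayscale intensities (0-255), one per pixel.
    augment:  list of (r, g, b) tuples, same length as ir_frame.
    alpha:    list of per-pixel augment opacities in [0, 1].

    Each grayscale pixel is promoted to (g, g, g) and blended with the
    augment colour, so the digital layer appears over the infrared
    image stream rather than over colour video.
    """
    out = []
    for g, (r, gr, b), a in zip(ir_frame, augment, alpha):
        out.append((
            round(r * a + g * (1 - a)),
            round(gr * a + g * (1 - a)),
            round(b * a + g * (1 - a)),
        ))
    return out

# One fully opaque augment pixel, one fully transparent one:
frame = composite_over_ir([100, 100], [(255, 0, 0), (255, 0, 0)], [1.0, 0.0])
```

Where the augment is transparent, only the grayscale infrared view remains visible; where it is opaque, the digital layer replaces it – the compositional relation that distinguishes this permutation from the webcam-based assemblages.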
Before setting off on an exploration of new techniques for performative interfacing,
the following section will examine the final iteration of the software assemblage in
which the technical elements of the interface are entirely handheld. In the Wild
Versions, augments were released into a natural environment, raising the more
general question of how media and organic elements might work together in a more
affective configuration.
While MR experiences are most often set indoors – with devices tethered to
computational networks and other instruments/interfaces – there is also a growing
cadre of techniques for use outdoors, operating in an expanded field of movement.
Already noted in chapter 1 were commercial and artistic examples from mobile AR,
using wireless networks to afford the geo-location of augments. In such examples, a
human experience of an environment unfolds through a cartographic link – formed
via GPS – with the physical environment, approached as a data object that is subjected
to procedures of mapping. In my approach where augments are thought as intra-active
and performative, it did not seem especially useful to follow cartographic and pre-
determined models, since this did not encourage emergent augmented relations.
My approach references late twentieth-century site-specific art more than
computational practice. Emerging in the outdoors, MR in this expanded field could be
approached as a post-studio practice that connects data networks to existing ecological
materialities already present at the site. Developed through the thinking and making
of many artists since the 1960s – but notably in the work of Robert Smithson and John
Baldessari – post-studio practices in fine art de-privilege the object, through the act of
producing work at a location determined by the artist, outside of the gallery system
(Buren and Repensek 1979; Ferguson, Tucker and Baldessari 1990). Working
ephemerally with geological and organic matter, Smithson developed processes of
material re-assembly that intervened at remote sites, away from the reach of an art
audience. As art theorists have argued, seminal works such as Spiral Jetty and
Yucatan Mirror Displacements 1-9 were known largely through their documentation,
not as objects (Kwon 2004; Housefield 2007).1
Immersed in the ecology of a bush reserve, a digital camera frames a wide shot and
awaits movement. Slowly, a hand comes into frame. It is swiftly followed by a
multitude of hand avatars; augments that are conjoined to the initial (physical) hand
by a software-based tracking system (Figures 29-32). Emergence is between digital
hand avatars and physical hand, as mediated by the ‘signal inertia’ produced by a
webcam stream (a phenomenon that will be explored further shortly). There is always
co-emergence here. Entanglement describes the relational trajectory of all these
different hands, as they emerge through the computational system as well as in the
physical world. Hand positions and gestures were developed with attention to
potential emergence in the wider field, not simply as elements within a display frame.
Fig. 28. A location shot at A.H Reed Memorial Park, Whangarei. Image: the artist.
Working with a small, self-designed, mobile MR system (Figure 35) brought the
software assemblage to an uncontrolled location with its own agential presence.
Recording of the performances was achieved via the iPhone 8 Plus screen record
function, with the device running the Unity Remote app, connected directly to my
laptop which was playing the Unity scene live. Recording was silent; however, in the
studio I overdubbed sound generated from the bio-electrical signal of plants at the site.
The process for generating and capturing bio-electrical signals will be closely
examined later in this chapter, so will not be explicated in this section.
Spatial expressivity has another aspect: the relational one. The ecological space
inhabited by an animal expresses, through the arrangement of surface layouts,
the capacities it has to affect, and be affected by, the animal (2007:103).
Fig. 29. Still image from Wild Version 1. Image: the artist.
He traces spatial expressivity for its influence on behaviour, as a relational field that
can be considered an affective force of the nonhuman. Considering this with respect
Fig. 30. Screen image from Wild Version 2. Image: the artist.
Fig. 31 and 32. Sequential screen images from Wild Version 2. Image: the artist.
The process of working with the site as an expressive space began before the recording
of these performances, and included the photographic studies made of trees that
would later be used to create digital hand avatars. I began to see how the hand avatars
could visually enfold aspects of the natural ecology, so I crafted graphics based on
plants found in the local area, using the Speedtree Modeller.2 Despite being based
graphically on plant forms, the digital hand avatars are only quasi-figural: as surface
textures in the act of relation, these graphical surfaces tend to abstraction. My concept
was not to simply re-produce a modelled replica of vegetation as a hand. Rather, it was
to allow the hand avatars to resonate as an expressive element that could be read as a
semblance of a physical hand, as well as of the environment (such as the tree fern
avatar with actual tree fern in background, Figures 33 and 34). Material and
choreographic relations emphasise co-composition between data system and physical
hand, but also make apparent the living plants in this environment by way of the
graphical hand models.
Technically, the plant forms underlying the digital hand avatars are two-dimensional
image textures; a graphics layer attached to a physics model.3 As such they are
completely static until activated by the physics model: it is the connection these
surfaces make through code programmed in C# that generates the conditions for their
emergence and entanglement. The hand models are connected in a specific way,
through a technique that I have developed across this practice. In this technique,
multiple Leap Motion hand controllers are placed with overlapping three-dimensional
coordinates, in one Unity scene. This produces a temporally shifting digital topology
where localized points coincide with one another, then visibly entangle. The digital
hand avatars that sit across these overlapping three-dimensional coordinates generate
the potential for relational movements with the augments as choreographic data
objects. In motion, the hand avatars emerge in tandem yet at obtuse angles and
framings. Attention was also given to forming temporary conjunctions between the
fingers and palm. The entry point, angle, presentation and position of the physical
hand set many of the conditions for the emergence of the augments. Pressing against
and through one another, they spill across the surface of the webcam stream, drifting
in mid-air and seemingly unrestricted by framing conventions.
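The overlapping-coordinate technique can be illustrated in schematic form. The sketch below is written in Python for legibility only (the actual implementation was coded in C# inside Unity), and the origins, points, and tolerance are hypothetical: it illustrates nothing more than the logic of transposing device-local hand coordinates into one shared scene space, where points from different controllers can coincide.

```python
# Illustrative sketch: two tracking devices share one Unity-style scene,
# with deliberately overlapping origins, so that their hand points can
# coincide and visibly "entangle". All names, origins, and the tolerance
# are hypothetical; the thesis implementation was written in C#.

def to_scene(origin, local_point):
    """Transpose a device-local point into shared scene coordinates."""
    return tuple(o + p for o, p in zip(origin, local_point))

def entangled_points(points_a, points_b, tolerance=0.05):
    """Return pairs of scene points that coincide within a tolerance."""
    pairs = []
    for a in points_a:
        for b in points_b:
            if all(abs(x - y) <= tolerance for x, y in zip(a, b)):
                pairs.append((a, b))
    return pairs

# Two Leap-style controllers whose coordinate fields overlap.
origin_a = (0.0, 0.0, 0.0)
origin_b = (0.1, 0.0, 0.0)

hand_a = [(0.2, 0.3, 0.1), (0.5, 0.5, 0.5)]   # palm points, device A
hand_b = [(0.1, 0.3, 0.1), (0.9, 0.1, 0.2)]   # palm points, device B

scene_a = [to_scene(origin_a, p) for p in hand_a]
scene_b = [to_scene(origin_b, p) for p in hand_b]

print(len(entangled_points(scene_a, scene_b)), "coinciding point pair(s)")
```

The coincidence test is what produces the temporally shifting topology described above: as the hands move, different localized points fall within the tolerance and entangle.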
Fig. 33. Screen capture from Wild Version 4. Image: the artist.
Fig. 34. Screen capture from Wild Version 4, showing light diffractions.
trouble discerning which set of relational coordinates to feel its way toward next.
Physical moves are tentative gestures, as I attempt to apprehend the position,
orientation and scale of the micro-temporally generated hands, and craft a durational
response.
My hand is responsible for the initial gesture that is tracked by the Leap Motion SDK.
However, perceptually and then performatively, the opposite is true for my outdoor
MR system, which had inserted the phenomenon of webcam delay into this software
assemblage. The parsing back and forth of data at less than optimal bus speeds
produced the conditions for ‘signal inertia’, caused by a delay in the processing of the
webcam stream. This meant that my real hand appeared on screen, via the webcam
signal, about half a second after the digital hand avatar, since the micro-temporal
tracking provided by the Leap Motion SDK effectively ‘beat’ the slower webcam image
to Unity. In the recorded performances, a visible gap emerges between my physical
hand and the digital hand avatars.4 My response to this temporal mismatch was not to
attempt to return with a more powerfully designed mobile system. Instead, I saw the
opportunity to approach this delay as an affective force exerted by technical elements
that could be explored for its expressive potential. To facilitate unexpected intra-
actions between the corporeal and digital in the Wild Versions films as iterative events,
I worked with the materialities of signal. Suspended in the updating flow of webcam
data, the conjunction of digital hand avatars, physical hand on the screen, and hand in
physical space, not only produced a more relational assemblage than in Tactile Light
and Tactile Sound, but also investigated the affective potential of signal itself.
My improvisations worked with the signal inertia as a nonhuman yet expressive force,
where the delayed image stream came to mediate the digital hand avatars as
choreographic data objects. Relationally moving to screen-based visual cues, such as
position, gesture, and movement, my single physical hand improvised along with the
digital multitude. Real-time tracking was disrupted as digital avatars and images of
physical hands occupied the same screen space yet remained persistently temporally separate.
This gave the visible impression that the digital avatars had an agential presence. In
this way, the contingent emergence of signal inertia became a compositional element.
The play between relational movements as they emerge between the corporeal body,
the data system, and the natural ecology provided an opportunity to allow the
augments to operate outside a synchronised tracking system.
Fig. 35. Sketch of my homemade mobile system: webcam, laptop, iPhone 8 plus (the 'frame')
running Unity Remote app. Image: artist’s workbook.
Mixed Reality (MR) refers to the general case of combining images along a
continuum which ranges from purely real (unmodelled) data, such as raw video
images, to completely virtual images, based on modelled environments.
(Milgram 2006:1)
Such a definition occludes the significant role of signal in mediating an image stream,
In early 2017, the Leap Motion company introduced a custom head mount whereby
the gestural interface could attach to the front of a Virtual Reality Head Mounted
Display (VR HMD)6 to be utilised as a camera one could look through. The use of a VR
HMD for non-immersive display conveys a mix of camera stream – a framing which
also includes hand gestures – and signal (Figure 36). While infrared data is sent to the
Unity SDK in both handheld and head-mounted uses of the Leap Motion, in handheld
use the signal that carries the data is masked out. In previous software assemblages,
digital augments were transposed as elements within a clean looking webcam stream,
not on the actual infrared stream to which they were tracked. Since the infrared signal
was invisible, the augments rested against the webcam image stream, in a screen space
that still resembled physical space. In this permutation, that has materially changed.
Fig. 36. Screen capture from Leap Motion/Vive display. Image: the artist.
However, when incorporated into a VR HMD, the Leap Motion’s camera sensors can be
accessed by the Leap Motion API and passed through the HMD’s viewport.7 Using the
image pass-through feature renders the image stream from the infrared sensor as a
visible background plane. Operating at a threshold normally imperceptible to human
vision, infrared radiation requires an interface to materialise its electromagnetic field
as visible light (Figure 37). Now, digital augments emerge on this ground/field of
electromagnetic radiation. Through the Tactile Signal performances explicated soon,
I will suggest that this signal activates electromagnetic radiation as a disruptive
threshold through which to diffract the visual plane.
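The pass-through operation, in which invisible infrared intensities are materialised as visible light, amounts to remapping raw sensor values into a displayable grayscale range. The following is a minimal sketch, assuming for illustration a 10-bit raw range and 8-bit display output; the actual conversion is handled by the Leap Motion API inside Unity.

```python
# Illustrative sketch: rendering an infrared stream as a visible
# background plane means remapping raw sensor intensities onto a
# displayable grayscale range. The 10-bit raw ceiling is an assumption;
# the actual pass-through is handled by the Leap Motion API in Unity.

RAW_MAX = 1023   # hypothetical 10-bit sensor intensity ceiling

def to_grayscale(raw_row):
    """Map raw infrared intensities onto 8-bit display values."""
    return [round(255 * min(v, RAW_MAX) / RAW_MAX) for v in raw_row]

# One scanline of hypothetical raw infrared intensities.
scanline = [0, 256, 512, 1023]
print(to_grayscale(scanline))
```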
Fig. 37. Diagram of the Electromagnetic Spectrum. Credit: NASA’s Imagine the Universe.
Important for the Tactile Signal performances is that the point of view afforded by
the HMD cannot be considered a ‘window’ onto some artificially purified
notion of ‘raw’ unmodelled data (Milgram 2006:1). Looking through the Leap Motion
at the infrared signal disrupts a clear correspondence between real and virtual worlds,
interrogating the necessity of this as a design feature of MR. The actual pixelated plane
of the infrared signal is replete with diffraction patterns caused by the passage of
electromagnetic radiation through the Leap Motion device. As captured by the data
system, the infrared signal is a digital record of the diffraction of heat as it travels in
electromagnetic waves around objects in the physical world. When the infrared signal
arrives at the surface of the HMD display, it is akin to a visual archive of the
enfoldment of heat as digital matter, which is then further diffracted as it meets the
digital augments.
Furthermore, the passage of the infrared signal, where heat is converted to image
pixels, beckons an extension of the materialist analysis applied to the software
assemblage so far, to include the disruptive interference patterns produced by
‘signaletic’ media. Bodil Stavning Thomsen (2012) describes the qualitative elements of
signaletic media as ‘the ability to create affective encounters within the folded
operation of the signal’ (2012:8). Such a capacity to generate new encounters between
bodies and data at the level of signal now becomes an aspect of the software
assemblage formulation, since it affords an understanding of the entire augmented
space including its field as performative materials. The infrared signal renders its
image stream in a visceral materiality that is not often associated with MR: such a
signal needs to be analysed as a nonhuman agential force that can become affective
within the software assemblage.
interrogating the shifts that signal produces as it emerges and entangles with my
performing body.
Through the Tactile Signal performances, I will explore the infrared signal as
generating a highly problematized zone of complex intra-active movements and omni-
directional material relations. As a vector through which we might interpret Barad’s
conception of apparatus as it impacts on augmented materials, I will be using the head-
mounted permutation of the Leap Motion to co-compose a differentiated state of
embodiment for the performer, who becomes the ‘camera’. I will also be offering an
alternate view of sonics in MR, by utilising the bio-electrical signal coming from living
plants as a form of augmented audio. In these performances, I use my hand gestures
to diffract two types of signal through different designs of the assemblage. Firstly, in
the Yucca Relay performance, I use my hand gestures to generate digital augments in
response to the bio-electrical signal from a living yucca tree.
The world visible through the HMD implicates the performer in a mix of realities
where the ‘real’ is no longer rendered via a video image stream as an analogue of
human vision. Attaching a VR HMD precipitates a profound perceptual shift in my
own sense of embodiment while intra-acting with the digital augments. To convey a
visual image to the wearer, the HTC Vive uses stereoscopic vision delivered through a
screen display with a refresh rate of 90Hz. However, the Vive is not only a stereoscopic
apparatus: it also uses a laser-based positional tracking technique, whose sensors
transpose body movements to digital space.10 Yet I use the VR HMD in a
non-immersive system, with access to a mixed digital-physical world view through the Leap
Motion’s camera. Yet this camera view is distorted and grayscale, replete with
disruptive pixels that are certainly not imaging the world at 90Hz. Here, material
collisions – such as between the disruptive static of the infrared signal, the crisp digital
augments, and the human hand materialised as signal (Figures 40 and 41) – can be
understood as different relational thresholds of matter enfolded in the temporality of
the infrared signal as it passes through the Leap Motion/Vive apparatus.
Repositioning the Leap Motion from my hand to my face also brings into sharp relief
the difference between my experience of the performance and that of an audience.
While this split is always present, since there is continually a performer emerging via
the performance itself as well as whatever subject positions are affectively present for
an audience, the HMD amplifies this relation. For example, in the Agave Relay
performance – discussed in detail later in this chapter – I frame the HMD camera view
as the digital augments materialize and are modulated through my hand gestures with
the plant’s bio-electrical signal. In this process, I am entirely implicated as part of this
apparatus. At the same time, the HMD view is sent to a large external screen display,
perceived as a video rendering by the audience. Thus, at least two parallel perceptual
viewpoints are at play – one experienced as embodied by the performer as they
negotiate the tasks the apparatus requires, and one perceived by the audience as a
screen rendering of the embodied process. In such networks, MR is more than what
happens on the screen. Rather, the ‘mixing’ that happens in between relational forces,
generates an emergent MR, across the interferences of the nonhuman organic, the
signaletic and the corporeal. The affective movements of the corporeal, organic, micro-
temporal, and so forth, have the capacity to precipitate new senses of embodiment for
the performer.
Figs. 38 and 39. Signal path diagrams for Yucca Relay and Agave Relay.
An important processual shift outlined in this chapter is the transition from treating
plants as material (such as in chapter 2) to involving the bio-electrical potential of
plants as co-compositional in an assemblage. For example, in Tactile Sound, sonics
were generated by mechanical pressure I placed on piezo sensors using gestures. While
the piezo sensors were embedded in the wheatgrass, this technique did not record any
actual bio-electrical signals: it simply captured mechanical pressure as sound. By
contrast, the two Tactile Signal performances now under discussion activate plants as
signal producers. The significance of an approach that considers plants as more than
materials, lies with how a plant’s bio-electrical signal might modulate the digital
augments. It will be argued that this modulation challenges the informatic overlay
approach as it has been applied to augmented audio.
To offer a productive alternative, I examined the concept of the Body Area Network
(BAN). In engineering, BAN research investigates the human body as a transmission
channel for data, where the bio-electrical charges that circulate in fields around the
surface of the skin are harnessed as energy sources able to power wearable devices
(Zimmerman 1995; Fujii and Okumura 2012). In the performance practice of laser
koto musician and BAN interface designer Miya Masaoka, we find an exciting
experimental trajectory that modulates plants, data and the human body. Since the
1990s, Masaoka has worked intensively with plants as co-composers, creating
ensemble musical pieces for live performance. Her initial plant-human interfaces – a
type of BAN established to sense physiological electrical data near the body – were
developed with BAN pioneer Tom Zimmerman. These incorporate body-plant-energy
networks to generate live musical performances as well as scored compositions.
Clearly, Masaoka considers the responses and behaviour from plants as an element
that affectively co-composes the sound piece, citing their ability to emit different
signals according to their environment. Masaoka’s creative use of BANs can be
thought of as a way to pose provocations to the practice of augmented audio. I have
rethought some of her artworks in the context of performing augmented audio by
introducing issues of signal flow and modulation to the Tactile Signal performances
and to Contact Zone (in the next chapter). This sonic flow is not under the full control
of the performer, yet it can be adeptly modulated using both carefully placed tactile
gestures and subtle body movements. The bio-electrical signal of the plants could be
considered as a kind of micro-impulse, a bubbling ground of voltage that conveys a
mode of nonhuman affect. Through attention to the micro-impulses in a modulating
bio-electrical signal, aural perception shifts focus from the musical notes played as
expressive of the intentionality of the performer, toward the sonic forces manifested
by the impulses emitted by the plant. Mobilising Masaoka’s approach to assist in the
development of a generative sonics of augmentation, arrived at through the
articulation of real time signal flow, converges both the gestural movements of my
physical hands and the digital augments as they pass through the software assemblages,
Figs. 40 and 41. Screen images via HMD in Yucca Relay performance. Image: artist’s workbook
A large yucca tree resides in a backyard in Enmore, Sydney. For the purposes of this
performance, it has been fitted with electrodes attached to a MIDI Sprout sensor
whose data will be sonified using Logic Pro X. I approach, wearing the Leap
Motion/Vive HMD: through a point of view shot, the tree is framed. In advance of the
performance, a set of choreographic parameters was determined, establishing a loose
organisational structure for how this software assemblage might unfold. I would wait
for a bio-electrical signal emitted by the yucca. Then, I would improvise a gestural response
that activates digital augments via the Leap Motion/Vive combination (Figures 40 and
41). This response would be enacted while holding an ultrasonic microphone, whose
transducer captures the normally inaudible sound of my hand, as it moves through the
air in front of the tree (Figures 42 and 43). The process would then repeat until the end of
the performance, itself durational depending on the level of bio-electrical activity from
the tree and my threshold of attention with the system.
Since the tree was operating on a phenological time scale, waiting between five and ten
seconds for a sonic emission was normal, sometimes up to fifteen, a process over which
I had no control.13 Once the yucca emitted a signal, intra-actions between body, tree,
and data were open ended and durational. I tune carefully to the sonified bio-electrical
signal so as to sculpt a gestural response. Holding the ultrasonic microphone in my right
hand (Figure 42) does not obscure the tracking system of the Leap Motion/Vive
apparatus, which mostly registers my hand's outline. This means I can agitate the
ultrasonics that arise from my hand gestures, in response to the yucca's signal.
In previous software assemblages in this research, the graphic models applied to the
hand avatars were added to a structural 'mesh' in the code module.14 The mesh was
based on a human hand form, and the graphic model was an attached component.
Experimenting with shifting possibilities in Yucca Relay, I chose to remove the mesh,
instead attaching the graphic models to a line renderer.15 My newly designed meshless
avatars do not track the outline of the physical hand, but instead are attached to the
co-ordinates of my palm and wrist. Now, the digital augments react more fluidly, rather
than being tethered to a figural hand avatar as a condition for their emergence (Figures 44 and 45).
Since this new technique also prevented my physical hand from having a figural digital
companion to track during performative interfacing, it precipitated a shift in my
approach to that process as well. My hand gestures became more mobile, since I
needed to worry less about breaking the tracking due to excessive movement. My
embodied perception during performative interfacing was further challenged, since
the meshless avatars were less controllable than in the previous module. Although I
had only one line renderer attached to each hand, the Leap Motion was sending many
more coordinates to the Unity SDK than the line renderer could process. This resulted
in lines emerging in quite random patterns of interference across the Unity scene itself
(and, of course, in my HMD). The meshless approach and its challenges will be further
explored in the next chapter, during my solo performance portion of Contact Zone.
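The overflow that produced those interference patterns can be sketched schematically. In this illustrative Python fragment (the thesis work used Unity's LineRenderer in C#; the capacity and coordinates here are hypothetical), the renderer holds fewer points than the tracker sends, so excess coordinates are discarded each frame.

```python
# Illustrative sketch of the meshless line-renderer overflow: the
# tracker streams more palm/wrist coordinates per frame than the
# renderer can hold, so the excess is simply discarded and the drawn
# line captures only a partial, shifting subset of the gesture.
# Capacity and coordinates are hypothetical; the thesis work used
# Unity's LineRenderer in C#.

CAPACITY = 4   # the renderer holds far fewer points than the tracker sends

def render_line(incoming_points):
    """Keep only as many points as the renderer can process this frame."""
    return incoming_points[:CAPACITY]

# Palm and wrist coordinates arriving faster than they can be drawn.
tracked = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0), (5, 2), (6, 1)]
drawn = render_line(tracked)
print(len(tracked) - len(drawn), "coordinates dropped this frame")
```

Because the discarded subset changes from frame to frame, the resulting lines never stabilise into a figural hand, which is what made the meshless avatars less controllable than their meshed predecessors.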
Fig. 42. Yucca Relay's performative interfacing. Fig. 43. Tree with white electrodes.
Flux, threshold and scale were critical operations in this performative interfacing. For
example, as I reacted to the Yucca’s signal, gestures from my hands sketched graphic
flows in digital space that, in the physical world, troubled electromagnetic frequencies
picked up by the ultrasonic microphone. Or, as my head moved with the weight of the
HMD, the framing shifted toward a new orientation, tracking broke with the Leap
Motion SDK, sending augments across the screen and out of view. Or, the ultrasonic
microphone, capturing interference from the MIDI sequence generated by the tree’s
bio-electrical signal, relayed those wave forms as an auralization of normally inaudible
frequencies. Digital augments were bright and green, enfolded to the dark materiality
of the infrared signal. A multitude of signals – infrared, bio-electrical, and digital –
circulate through this software assemblage, in contesting relays that are aural, visual
and embodied. In this situation, I negotiated a response to the bio-electrical signal as
it emerged, with the added perceptual challenge of remaining attentive toward the
shifting frame of the point of view shot from the HMD.
Fig. 44. Tactile Signal: Yucca Relay performance, meshless avatars. Image: the artist.
Fig. 45. Tactile Signal: Yucca Relay performance, meshless avatars. Image: the artist.
While Yucca Relay had produced a situation where a living tree became a co-
composer, the piece was still limited by its ‘call and response’ format, where
essentially two parallel systems emerged alongside one another. To further explore
opportunities for performative interfacing with plants and augments, I needed to
design a system that was interwoven rather than parallel, and operated with the
recursive relays generated by the data and signaletic materialities in a modulating
meshwork: a system that would allow for a greater circulation of patterns of
interference between digital augments, sonic signals and my performing body. The
result of that endeavour is detailed in the next section.
An agave plant sits in a gallery space, emitting bio-electrical signals while waiting for
the arrival of a performer. I arrive, wearing the Leap Motion/Vive apparatus. Sitting
on a chair facing the plant, I use tactile hand gestures to shift the frequency of
the bio-electrical signal (Figure 46). Agave Relay came about through a desire to
interweave digital augments, hand gestures and the bio-electrical signals from living
plants. I used tactile gestures, touching the leaves of the agave plant to modulate with
its bio-electrical signal, which was networked to the MIDI Sprout capacitive touch
sensor, and the Logic Pro X sound design programme. As I performatively interfaced
with this plant-body circuit, augments emerged in the display of the HMD, attached
to my hand gestures as I manipulated the agave’s leaves. These gestures – described
in the section below – modulate the bio-electrical signal as it is generated, and also cause the
digital augments to emerge. Investigating the material process of modulating a bio-
electrical signal as a mode of audio augmentation in MR, I generated a method that
posited an alternative formulation to the computer science/engineering/commercial
practice of layering a ‘realistic’ sound on top of a visual augment to give it a more
convincing virtual presence. Imbricating both data and signal, this software
assemblage requires that I co-compose using processes of recursion and modulation.
Fig. 46. Agave Relay, screen capture from video. L-R: HMD view/environment view.
Helpfully, the biological structure of the agave plant is suited to such tactile
improvisations, since its leaves are larger than the human hand and of a thick cellular
consistency that can withstand some manipulation. Below, I show via images the
gestures as different techniques. Two images are placed side by side: first, a
photographic image of my hand shows the gesture enacted on the agave plant; and,
second, a screen capture documents the MIDI data generated by the gesture.
Technique 1. Holding the base of two leaves close to the attached electrodes with both
hands generates a higher pitch (Figure 47), imaged as a flat line of tone that is held
over time (Figure 48).
Fig. 47. Technique 1. Image: Simon Howden Fig. 48. Technique 1. MIDI. Image: the artist.
Technique 2. Folding the base of a leaf backward and forward quickly (Figure 49)
shifts the pitch up and down over time, imaged as a ripple-like pattern (Figure 50).
Fig. 49. Technique 2. Image: Simon Howden Fig. 50. Technique 2. MIDI. Image: the artist.
Technique 3. Holding the top point of the leaf between thumb and index finger, I drag
my hand downward toward its base (Figure 51). This pitch slide is imaged as a
downward diagonal line (Figure 52). Similarly, if I were to start at the bottom and drag
my hand upward, the pitch slide would follow in that direction.
Fig. 51. Technique 3. Image: Simon Howden. Fig. 52. Technique 3. MIDI. Image: the artist.
Technique 4. Holding the top edge of a leaf with one hand, and the base with the
other, I flutter the top edge (Figure 53). This causes the signal to stutter (Figure
54).
Fig. 53. Technique 4. Image: Simon Howden. Fig. 54. Technique 4. MIDI. Image: the artist.
In this situation, where an agave plant becomes an instrument for generating sound
with technical objects whose parameters are open to the corporeal body, hand gestures
need to align with the sound generated by the emergent signal. Such alignment is not
pre-determined, but is in response to the sonics that emerge as a result of the micro-
impulses emitted by the agave, transposed by the MIDI Sprout sensor. The
performative interface created through the conjunction of agave, performer’s gestures,
data and signal networks is in contradistinction to the way that more traditional
musical interfaces operate. For example, the mechanical interface of the piano is a
keyboard where each note activated is fixed at a designated pitch.16 In my performative
interface, however, pitch changes over time, and is not given in advance: pitch fluidly
shifts through a conjunction of material changes in the apparatus itself (the plant-
signal circuit), the plant’s biological processes, and the materiality of my gestures
applied to the plant’s leaves.
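The contrast drawn here between fixed and fluid pitch can be sketched numerically. In this illustrative Python fragment, the mapping, voltage range, and note span are all hypothetical (the MIDI Sprout's actual transposition algorithm is not reproduced): a keyboard key always returns the same MIDI note, while a fluctuating bio-electrical voltage is continuously remapped to pitch.

```python
# Illustrative contrast between fixed and fluid pitch: a piano key is
# fixed at a designated MIDI note, while a fluctuating bio-electrical
# voltage is continuously remapped to pitch. The voltage range, note
# span, and mapping are hypothetical; the MIDI Sprout's actual
# transposition algorithm is not reproduced here.

def piano_pitch(key_index, lowest_note=21):
    """A mechanical keyboard: each key is fixed at a designated pitch."""
    return lowest_note + key_index

def plant_pitch(voltage, v_min=0.0, v_max=5.0, note_lo=36, note_hi=96):
    """Map a momentary bio-electrical voltage onto a MIDI note number."""
    v = min(max(voltage, v_min), v_max)
    span = (v - v_min) / (v_max - v_min)
    return round(note_lo + span * (note_hi - note_lo))

# The same key always yields the same note...
print(piano_pitch(39), piano_pitch(39))
# ...while the plant's shifting signal yields a shifting pitch.
print([plant_pitch(v) for v in (1.2, 1.9, 2.4, 1.1)])
```

In the performance itself, the voltage is not a parameter I set but a value the plant emits; my gestures on the leaves perturb it, which is what makes the pitch relational rather than given in advance.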
The cause of sonic emissions in plants remains a mystery. It has recently been
established that young corn shoots emit frequencies in the broad range of 10 Hz to 240
Hz when growing toward a water source which has been blocked off by an impermeable
barrier. Researchers speculate they are reacting to the sound of the water, not the feel
of it, as was previously thought (Gagliano et al. 2012:323-4). Yet the specific reason
for this phenomenon is a matter of debate. For example, Gagliano et al. state, ‘we are
growing increasingly doubtful of the idea that all acoustic emissions by plants are the
mere result of the abrupt release of tension in the water-transport system’: instead,
they suggest the clicks might be a form of plant communication (2012:324). However,
such debates are beyond the scope of this research.
Fig. 55. Screen capture from HMD with my hand under the avatar. Image: the artist.
As mentioned earlier, the holding gestures for modulating with the agave not only
provoked sonic disturbances; they also caused the digital augments to materialize
from the Unity SDK via the Leap Motion/Vive headset, mapped to my physical hand
(Figure 55). As I modulated the bio-electrical signal from the agave, digital hand
avatars materially adapted to my body position, shuffling the fingertip and joint
coordinate values in the graphic models. At times, the low contrast of the
infrared signal made it difficult to discern the position of my physical hands, nested as
they were in the leaves of the plant when manipulating the bio-electrical signal. The
digital augments similarly provided little clue, since their ‘natural’ orientation had
been altered by hand models designed around a re-jointed orientation between fingers
and palm, confusing my apprehension of body and image. My fingers, pushing at the
base of the leaves or fluttering the tips, were largely occluded from the HMD view,
smothered by the purposefully re-jointed augmented hands (Figure 55 and 56). In
Agave Relay, the performer is not responsible for the actual flow of signal, or for its
continued passage, but is able to use tactility to insert disruptions throughout. Such
disruptions are the stuff of performative affect; qualitative insertions into the flow of
signal as it meets my gestures at the surface of the agave.
Fig. 56. Agave Relay, screen capture from HMD, with contorted avatar. Image: the artist.
Simon Penny (2009) has commented on the trend, beginning in the 1990s, to
categorise art exploring virtual worlds as ‘Virtual Art’ (such as Grau 2003). Penny
diverges from the neat historicity of this categorisation, noting that many artworks by
influential practitioners such as Rafael Lozano-Hemmer, Perry Hoberman, David
Rokeby, Char Davies, Jeffrey Shaw and others, could also be discussed as critical
inquiries pertaining to embodiment. Penny states that ‘the substantial work of these
artists - which went largely unremarked in the hysteria of virtuality - was the
development of intuitive bodily interfaces to such worlds’ (2009:8). For Penny, virtual
worlds — far from being defined as virtual by computational attributes — articulate
I will now turn to a discussion of two important artworks that similarly deploy HMDs
to enact border crossings between the liminal boundaries of the physical and the
digital. While the software assemblages of the Tactile Signal performances do not
utilise HMDs to produce immersive VR, nonetheless the artworks that I will
investigate similarly approach digital spaces as entangled with physical materialities.
In the section that follows, I examine VR as it has been used artistically toward
developing a digital body that does not leave behind its corporeality, but rather exists
alongside and in a vibrant relation with its digital avatar as a multiplied embodiment.
My main focus is Adam Nash and Stefan Greuter’s Out of Space (2015)17 but I gesture
initially to Char Davies’ Osmose (1995)18 as a pioneering work in this area. Made
twenty years apart, both artworks advance a critical exploration of embodiment that
uses sensing apparatuses to introduce nuanced corporeal shifts into stereoscopic VR.
The cross-modal application of sensing to the primarily visual VR experience situates
the interactant in a hybrid and multiplied modality of digital-corporeal realities that
cross thresholds rather than maintain dualities. Both artworks produce an experience
of VR as situated and emergent. At the same time, these experiences are not
completely interior to the interactant, since both use forms of screen
display/projection of the interactant’s experience to include an audience not directly
involved in the VR experience.
Char Davies’ Osmose locates the ‘immersant’19 in a web of different forms and levels
that are navigated using breath. Osmose consists of a series of virtual modules that the
immersant traverses20, while in a standing position, wearing a custom-designed vest
measuring breath, mapping the virtual world’s parameters to physiological effects
such as inhaling and exhaling (Davies 1995; Davies and Harrison 1996; Davies 2002).
In the virtual environment, actual biological processes were transposed to digital
models, such as plant photosynthesis in the ‘forest module’ (Jones 1995:25). While
emphasis was on the immersive aspects of the data system, where immersants were
calibrated with the digital simulation through the embodied action of their breathing,
the material presentation of the work in the gallery space also incorporated an exterior
perspective. Audience in the space could see the activities of the immersant as they
navigated the virtual modules, shown as a shadow on a translucent screen placed in
front of their body at a human scale (Davies and Harrison 1996:27; Saffer 2008:162);
as well, projected in 3D on another screen in the space, the HMD view of the
immersant was available for the audience to experience (Davies and Harrison
1996:28). The work thus operated in some ways like a performance, where the interior
perspective of the immersant was modulated by exterior perspectives of embodied
action.
Out of Space (2015), by Adam Nash and Stefan Greuter, is a head mounted virtual
reality (HMD VR) artwork, where ‘each interactor / artist creates a unique virtual
work, unique to themselves and yet outside of themselves, in the world, virtually.’
(Everything is Data exhibition catalogue 2015).21 An interactant wearing the SpaceWalk
VR system (designed by Greuter and David Roberts) accesses the artwork’s
virtual world of data and begins to explore. Activated by the movement of the
interactant, virtual objects are called forth from the data system. Yet these objects are
not formed into graphic models, and there are no representational forms to act as
situated guides. Instead, they are visually abstract propositions that never reveal
figuration: this is data, as data. The artwork is formed, as an emergent event, through
the modulatory relations between the interactant’s corporeal body, the intensive
capacities of Nash’s bespoke data system, and the technical affordances that create
mobility in the SpaceWalk System. SpaceWalk is a system for embodiment in VR that
uses the Oculus Rift as a visual display. It pre-dates by about a year the commercial
availability of VR devices that use gestural interfaces in accompaniment with
stereoscopic vision (such as the HTC Vive). Greuter and Roberts note that their
intention behind SpaceWalk was to create a ‘full body immersive virtual
reality platform [that] opens the door to Virtual Reality in small environments, such
as people’s homes, that is compelling, easy to setup and use’ (2014:1).
Data is apprehended by the audience via a projection system in the room: the HMD
view of the participant, projected at scale, operates as a performance for an audience
of bystanders who may soon themselves be participants. Data is relayed to performer
and audience in partial configurations, apprehended depending on subject position.
The data generated by the interactant is not only interior to their perception –
although obviously there is a different sense of embodiment for the interactant than
the audience. The performer cannot see their own body, only the phenomena
generated by their intra-actions with the data system; and, while the audience can see
both projection and performer, they have no access to the perceptual immersion of the
VR experience inside the HMD. In this way, inter-related subjectivities, as visual data
of the performance, operate alongside one another in the exhibition space. As iterative
and temporary assemblages of data (or digital matter) in movement, Out of Space
generates relations in an experimental event that cannot be determined in advance.
Without the arrival of the performer, there is no event, there is no data: the ‘art’ is in
the relation. This is not software as an executable series of commands with an already
determined conclusion, but a relational system of co-composition imbricating
software and bodies across a digital-physical topology.
Vaughan and Nash (2017:150) point to the critical combination of ‘performance’ and
elements of ‘liveness’ when attending to the archiving of digital performance artworks.
This chapter touched on the notion of the ‘signaletic’ and allowed it to influence
strategies, methods, and techniques for performative interfacing. The use of infrared
signal as a ground for digital augments engaged a mix of realities that were
dynamically co-shaped by the signaletic, whose disruptive materiality generated a
version of augmentation that further problematizes the informatic overlay approach.
In the Tactile Signal performances, I engaged a nuanced choreography of intra-
actions in order to co-compose the performance in a relational field that lures infrared
signal, digital augments, hand gestures, and bio-electrical signals into affective and
diffractive composition. Extending this direction in Contact Zone, I will be crafting a
performance that enfolds digital augments, human touch, a computational network,
various audio devices and hardware sensors, as well as an agave plant and a living
green wall into the gallery space. I will be co-composing with these elements while
filming the performance for the audience in real time through the Leap Motion/Vive
apparatus. This entangled performing/filming position will problematize the role of
performer, while the head mounted apparatus will continue to challenge and elaborate
my own sense of embodiment. Before long, enacting practices that negotiate with
nonhuman affect, the software assemblage will also generate new senses of subjectivity
and embodiment.
1 For example, Smithson published a series of photographs of the Yucatan Mirror Displacements
1-9 in Artforum with an accompanying essay called “Incidents of Mirror-Travel in the Yucatan”
(1969).
2 The SpeedTree Modeller is a proprietary application made by Interactive Data Visualization Inc. and
widely used to create virtual vegetation for cinematic and game design. Retrieved from
https://store.speedtree.com. (accessed 8 April 2017).
3 In terms of the application of the models to the data system, the way that the Leap Motion SDK
computationally executes the hand objects is divided into two structures: a graphics model and a
physics model, the former producing the appearance of the hand, and the latter the anatomical
mesh. In the Wild Versions, the graphics models were designed around the types of living
vegetation found in the performances’ various locations, while the physics model is based on the
human hand. The plant images texture the mesh of the physics model, forming a loose wrap
around the hand that flows into screen space. In the design of the graphic texture, gaps were
included to abstract, to some degree, the figurations of the hand, and there was an emphasis on
the idea of causing the leaves and branches to partially wrap the hand mesh.
4 Refer to Appendix 1 for accompanying video documentation of the Wild Versions to apprehend
this phenomenon.
5 They distinguish between viewing an object in the real world with the naked eye (direct viewing)
year. I use the HTC Vive for my research. Retrieved from https://www.vive.com/au/product/
(accessed 4 February 2018).
7 Officially this just works with Oculus Rift, but I have had no issues operating with the HTC Vive.
https://developer-archive.leapmotion.com/gallery/oculus-passthrough (accessed 10 April 2018).
8 The hardware device used here for capacitive touch sensing is the MIDI Sprout. It uses non-
Each VST was modified by the addition of reverb, delay and, in some cases, envelope parameters
such as oscillation. My tendency was to select drums rather than pads, and to think closely about
timbre in the context of the tones emitted. Each VST was equalised and compressed in relation to
the overall sequence, with gain adjusted to bring lower-velocity notes into alignment with
higher-velocity ones.
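The gain alignment described here can be illustrated with a minimal sketch: each quieter note is nudged toward the level of the loudest one. The velocities and scaling amount below are hypothetical examples, not the settings actually used in Logic Pro X.

```python
# A minimal sketch of velocity alignment: scale each MIDI velocity a
# fraction of the way toward the loudest note in the sequence.
# Values here are hypothetical, for illustration only.

def align_velocities(velocities, amount=0.5):
    """Move each velocity a fraction (amount) of the way toward the maximum.

    amount=0 leaves velocities untouched; amount=1 flattens them all
    to the loudest value.
    """
    peak = max(velocities)
    return [round(v + (peak - v) * amount) for v in velocities]

print(align_velocities([40, 80, 120]))  # [80, 100, 120]
```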
10 To positionally track the user’s head in physical space, the HTC Vive uses a system connected
to two base stations that emit lasers, which are triangulated with a sensor network placed on the
head mount. Differing from the earlier wave of VR devices, the stereoscopic vision of the Vive is
accompanied by (optional) hand held controllers that sense the location of the user’s hands and
triangulate that position with signal from the laser base stations and sensors attached to the
HMD itself (Niehorster, Li and Lappe 2017).
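The triangulation principle can be sketched in two dimensions: if each base station reports the bearing at which its laser sweep crossed a sensor, the sensor sits at the intersection of the two rays. This is an illustrative simplification; the Lighthouse system derives such angles from sweep timings, in three dimensions.

```python
import math

# 2D sketch of angle-based triangulation: two base stations at known
# positions each report a bearing to a sensor; the sensor sits where
# the two rays intersect.

def triangulate(p1, angle1, p2, angle2):
    """Intersect the ray from p1 at angle1 with the ray from p2 at angle2.

    Angles are in radians; returns the (x, y) intersection point.
    """
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2x2 determinant.
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    t = (-dx * d2[1] + d2[0] * dy) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Stations at (0,0) and (4,0), sighting the sensor at 45 and 135 degrees
# respectively: the rays cross at (2, 2).
x, y = triangulate((0, 0), math.radians(45), (4, 0), math.radians(135))
print(round(x, 3), round(y, 3))  # 2.0 2.0
```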
11 Pieces for Plants has had numerous public performances and iterations, including at the
Lincoln Center Out of Doors Festival and The Lab in San Francisco. Pieces for Plants was first
performed at the Chapel of the Chimes in Oakland, California in 2001. Retrieved from
http://miyamasaoka.com/work/2006/pieces-for-plants-gallery-installation/ (accessed 19 July
2018).
12 Retrieved from http://miyamasaoka.com/work/2006/pieces-for-plants-gallery-installation/
and relationships with weather and climate’ (Schwartz 2003:1), so phenological time in plants is
based on the biological rhythms of the seasons: the equivalent form of ‘human’ time being
durational.
14 Documentation for this technique can be found here: https://docs.unity3d.com/Manual/class-
Mesh.html
15 Documentation on this technique can be found here: https://docs.unity3d.com/Manual/class-
LineRenderer.html
16 For example, the fifth A (called A440) on an ideal piano is tuned to 440Hz: when you watch a
piano tuner at work, this is the first frequency they demarcate in the system, with the other
frequencies being divided across the remaining 87 keys (Reblitz 1976) so that the system is
considered “even tempered” (Fischer 1975:98). Subjecting the instrument to a particular tuning
method, which is universally accepted as correct (prepared piano experiments by John Cage and
others aside), means that a tuned piano always expresses the same spectrum of frequencies.
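The arithmetic of even temperament can be made explicit: taking A440 as key 49 in the standard 88-key numbering, every other key's frequency is 440Hz multiplied by a power of the twelfth root of two. The sketch below is my own illustration, not drawn from Reblitz or Fischer.

```python
# Equal-tempered frequencies on an 88-key piano: a minimal sketch.
# Assumes the conventional numbering in which A440 is key 49.

def key_frequency(key_number: int, reference_hz: float = 440.0) -> float:
    """Frequency of a piano key under twelve-tone equal temperament."""
    return reference_hz * 2 ** ((key_number - 49) / 12)

# A440 itself, and the A one octave above (key 61):
print(round(key_frequency(49), 2))  # 440.0
print(round(key_frequency(61), 2))  # 880.0
```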
17 Exhibited at Everything is Data August 14 – September 26, 2015. NTU ADM Gallery 2,
explores and becomes a part of: a world of text and literature through a fog; a forest; a clearing; a
pond; a leaf; one can journey inside the ground; into an abyss; into a world of lines of code’
(1995:24).
21 Retrieved from http://gamedesignresearch.net/out-of-space/ (accessed 4 October 2016).
22 Like Out of Space, all my software assemblages are from ‘generic’ files that iterate differently
with each successive performance. The versions of the performative interfacings recorded in this
dissertation, then, are only one potential version.
CHAPTER 4
Rather than conceiving a situation where digital augments are executed by a human
‘user’ as a screen-based interaction, this research has nurtured the idea that both
digital and signaletic materialities emerge through intra-action. Furthermore, the
contingent form of their emergence is actually mutually co-constituted through their
‘entangled agencies’ (Barad 2007:33) with other modes of matter. A persistent refrain
has been the notion that MR emerges off screen as well as on. While it is a technical
necessity that the screen be the primary site where digital materialities are displayed,
my approach to MR pays attention to phenomena that burst into physical space as
well. In this final and concluding chapter, I discuss the culminating software
assemblage for this research, Contact Zone.1 Thinking with diffraction – a concept I
borrow from Haraway and Barad – as an artistic strategy, I will aim to articulate
performative interfacing in MR, as enmeshed with the intra-active relations between
various phenomena born of plants, corporeality, signal, and code. These phenomena
are generated across, by, and through, different kinds of matter and materialities.
For Donna Haraway, diffraction deploys a different optics than a reflective looking,
which seeks to see only the same: ‘diffraction patterns are about a heterogeneous
history, not originals. Unlike mirror reflections, diffractions do not displace the same
image elsewhere’ (Haraway 2000:101). In my research, interference patterns enacted
by diffraction are investigated as entry points for thinking/experiencing material
relations differently. For example, the visual design of the plant-hands that have
seeded their way into several of my software assemblages de-forms the human hand as
it engages semblances of ‘leaf-ness’ oscillating across data, suggesting a more tactile
approach to virtual onscreen space. Yet this likeness to leaves is not an imitation or
mirroring of a leaf, but a diffractive approach, where the human hand disperses across
physical and digital spaces as well as oscillating with plants to co-compose.
For Barad, diffractive approaches aim to ‘produce a new way of thinking about the
nature of difference, and of space, time, matter, causality, and agency, among other
important variables’ (Barad 2007:73). To move beyond the geometry of optics, Barad
conceives diffraction as – first and foremost – a material process that sidesteps what
she terms the ‘self-referential glance back at oneself’, inherent to reflective methods
(88).4 The disturbances caused by diffraction generate patterns of interference that
are performative and entangled, not representational or analogous (88). Barad also
notes that the diffractive movements visible in (for example) ocean waves, also hold at
the smallest scales, such as in the movement of electrons. For example, the Davisson–Germer
experiment (1927) found that under certain conditions, electrons shot
through a vacuum would produce both a wave pattern and a particle pattern
(2007:82). Previously, it was thought that electrons either travelled in waves or
particles, but not both under different conditions. This is the core of Barad’s point in
relation to a diffractive methodology, socio-culturally applied: changing the conditions
that cause apparatus and matter to intra-act as they do, can produce radically different
results for the ‘marks’ they leave on bodies. Her re-working of diffraction – via an
understanding of the vibrancy of matter operating at many scales – is highly applicable
to media art, since it assists in situating entangled forces such as signal and data as
they iterate through different configurations of apparatuses.
Through Contact Zone, I will be articulating various diffractive processes that occur as
a result of the intra-actions produced out of an ecology of plants meeting the
electromagnetic spectrum (infrared signal), meeting sonified inaudible frequencies
(ultra-sonic noise), meeting custom designed software (digital code), meeting my
tactile and fluid human gestures (the performing body). Before turning directly to
Contact Zone, it is important to situate my work in the context of some of the different
artistic mobilisations of plant ‘energies’ that have tried to rethink the material
relations made possible by re-assemblages of the technical and organic.
Investigating the various ways artists have engaged the energies coming from
plants reveals a range of approaches, from the aesthetic and sculptural to the
conceptual and even genetic. Art that connects plants with media technologies
emerged as a notable preoccupation in the 1970s, when influential pieces such as Nam
June Paik’s TV Garden (1974) arranged plants as sculptural elements in a media
environment. The practice of using plants as sculptural objects in a fixed spatial
arrangement is continued in contemporary art, where various installation artworks
by Olafur Eliasson — such as Forked Forest Path (1998) — chart a visitor’s trajectory
through the gallery amidst organic structures. Artificial environmental realities that
turn interior gallery spaces into scaled-down exterior environments, enfold the visitor
as a relational component in a hybrid ecology.
From the 1970s, seminal compositions for sound performance such as Child of Tree
(1975)5 by John Cage, sonified the bio-electrical signals from plants as musical
instruments that were ‘played’ by a performer. Crucial here was the idea that, when
playing these loosely scripted pieces, ‘the focus is on carrying out the action itself,
regardless of consequence’ (Johnson 2003:504). Cage’s sense of music as an action
that negotiated durational time went against metrical conventions inherited from
Western classical music, where music was performed in determined time signatures.
At around the same time, Richard Lowenberg and John Lifton used the ‘gold needle’
technique6 to measure the bio-electrical signals from plants, transferring those signals
to electroencephalography (EEG) devices worn by human interlocutors.7
During the same period, John Baldessari used a Sony PortaPak to make the
provocative conceptual video Teaching a Plant the Alphabet (1972).8 A response to
Joseph Beuys’s performance How to Explain Pictures to a Dead Hare (1965),
Baldessari’s video staged the absurd situation of a human trying to teach a plant to
read, a comment on the ‘hippy’ generation’s desire to communicate with plants.
Baldessari states:
I thought conceptual art at that time was too pedantic. There were many ways
artists used language, so why not try some other way? … Teaching a Plant the
Alphabet was done during the hippy times. There were books about how to
communicate with your plants. I thought, okay, I guess I’ll start with the
alphabet and then we’ll talk … . (Baldessari quoted in Morgan 2009).9
The brutal black humour in Baldessari’s teaching ‘method’ toward his plant-pupil
helpfully illustrates the differently perceived thresholds between human life and plant
life. Baldessari’s critique is levelled at the tendency in some art of the period to
treat the signals passing from plants to humans as an intentional form of
communication by plants with humans. That plants use chemical
signals to communicate with one another as well as with predator species in the
specific context of their local ecology, is an emergent research trajectory explored by
evolutionary biologists using detailed sonic and visual imaging techniques (Trewavas
2005; Ferrari, Wisenden and Chivers 2010; Gagliano 2012; Gagliano and Renton
2013). Yet still other researchers point out that since imaging the inside of a living
plant remains unreliable, little is known about the specific cellular
configuration that motivates chemical processes, and while isolated data is collected
According to Prue Gibson (2018), a shift is taking place in artistic practice engaging
plants, where the organic realm is no longer treated as primarily aesthetic material.
Rather, artists like Natalie Jeremijenko (Gibson 2018:166) and Eduardo Kac create
artistic projects that communicate the idea that plants have their own particular
agency.10 Combining both issues of genetics and signal, Laura Beloff and Jonas
Jørgensen nurtured a community of Danish Nordmann Fir trees that had been cloned
from the same biological stock (Beloff and Jørgensen The Condition 2015-2016).
Placing them in rotating boxes designed to negate the effects of the earth’s
gravitational pull on growth cycles, their inquiry sought to probe ‘futuristic
speculations on the possibility of plant societies living under radically different
conditions’ (Beloff and Jørgensen 2016:19). Artworks such as those by Beloff and
Jørgensen, engage with processes of plant-signal transduction, which examine the
capacities of a plant’s calcium sensing system.11
Since it takes some time for the viewer to discover the different levels for
modulating and building the virtual plants, he will develop a higher sensitivity
and awareness for real plants (Sommerer and Mignonneau, 1992).13
Sommerer and Mignonneau’s work places plants as active elements of the installation;
they have some agential materiality with a participatory audience. The tuning that
participants must develop to encourage the plants to flourish is entirely different from
art that uses plants for passive aesthetic purposes. On this point, John Ryan (2015)
provides a useful summary of the shift toward approaching plants as co-composers:
Whereas visual plant art, tactile plant art, and plants-as-art form exact degrees
of representation or manipulation, plant-art produces a flux of meaning
iteratively between the plant, artist, audience, and artwork in sensory contact.
This flux is the basis of the co-becoming between us and other, between nature
and technology, between the vegetal and digital, and is a salient mark of plant-
art (Ryan 2015:54).
Such artworks not only question the idea of plants as passive objects, but extend
human-plant relations to recognise their situated and embodied modes of agency.
Moreover, as manifest in Sommerer and Mignonneau’s statement above, tactility
emerges as a strategy that might afford a richer interrogation of plant-human relations
beyond visual aesthetics.
My own relationship with plants is culturally entangled with my Māori ancestry, where
the organic kingdom is considered to coexist in a radical cosmological contingency
with humans (Reed 1963). Explicated through the concept of mauri, which is the idea
that each element of the natural environment has a ‘life force’ (Pohatu 2011), new
configurations of care and mutual ethical responsibility emerge through kinship links
between human and nonhuman actors (Royal 2003:95). These nurture familial
relations of care or kaitiakitanga (Barlow 1991). Episodes of material transferral and
transmutation from cosmology are numerous, such as the figure of Tane Mahuta, the
symbolic man-tree of the Waipoua Forest who, in human form, brought light to a dark
universe by pushing his parents (the Sky and Earth) apart, creating the conditions for
the material world to flourish. Such cosmological understandings help articulate
relations outside of humanist (and Western) paradigms that have artificially separated
nature and culture, a paradigm artists such as Kac also revoke at the genetic level.
I am fascinated with the molecular architecture that plants and animals share,
as well as with the kinds of instrumentation, interdisciplinarity, and knowledge
practices that have gone into the historical possibilities of understanding how I
am like a leaf. (Haraway 2000:132)
the sonic emissions that fill the gallery space, leading to the impression of a ‘plant
singing’.15 Contact with the plants via human touch, occurs through the meeting of
human and plant electromagnetic fields, and this contact produces the sound in the
installation. In the previous chapter, we examined the particular material
arrangement of the Tactile Signal performances, where bio-electrical signals from
plants were experimentally deployed as sonic augmentation. In Contact Zone I build
on this prior experimentation, by further interpolating plant signal into relays with
digital augments, and then modulating those using my body.
In the Contact Zone exhibition, this ecology is transported from the post-studio reality
of my backyard and front porch, to the less vibrant interior of the gallery space. A
different proposition for mixing realities is generated inside the gallery; one that
conjoins the two ecologies of media and nonhuman organic matter. As a practice,
augmenting biological environments thinks with human and nonhuman relations as
Fig. 57. Green Wall Panel on my front porch, March 2018. Image: the artist.
Contact Zone nurtures the emergence of human and nonhuman matter, as a ‘dynamic
relationality ... being attentive to the iterative production of boundaries, the material-
discursive nature of boundary-drawing practices, the constitutive exclusions that are
enacted, and questions of accountability and responsibility for the reconfigurings of
which we are a part’ (Barad 2007:93). Alongside this thinking, the technical and
expressive design of Contact Zone proceeds from the concept of iteratively re-
assembling many of the previous elements of this research so far.
The design of Contact Zone composes a situation where plant bodies resonate with
human bodies and technical devices, enmeshing signals and data as augmented
materialities. It operates in two main parts: firstly, an unstructured visitor-led
experience, where participants can explore the nuances of their hand gestures as they
arise in tandem with a digital avatar, activated alongside plants whose bio-electrical
signals have been sonified; and, secondly, a performance by myself lasting
approximately 15 minutes, utilising the techniques for performative interfacing
discussed throughout the course of this research. While technical details such as the
arrangement of apparatuses, the modules that compose the software, or the placement
of materials in the physical environment, are determined by myself in advance of the
exhibition, the generative and transient relations that coalesce via the entangled
emergence of signal, data and corporeality arise as they do on the day. Manifested
by the material-discursive boundary-making practices of agentially real yet nonhuman
entities, several processes that I performatively intra-act with – such as the bio-
electrical signals from plants and the ultra-sonics that create feedback in the room
environment – are indeed quite unstable modes of matter/signal.
A recurring process in my research trajectory has been the re-working of the Leap
Motion gestural controller as a device that is open to more performative modes of
interfacing than was intended by its industrial/commercial designers. Contact Zone
actually inflects the handheld and head mounted permutations of the Leap Motion,
described in the previous two chapters (see Figures 60-62). Folding the two
permutations together in one performance requires that I negotiate processes that manage
demanding corporeal movements: as observed in chapter 2, my hands improvise
micro-gestures as they come into contact with plants and their signals; and, as
articulated in chapter 3, my head is the ‘camera’, and it must frame a continuously
tracked point-of-view shot as I trace a pathway through the gallery space. Elsewhere,
I have intra-acted performatively with digital hand avatars using micro-gestures. In
software assemblages such as Tactile Light, Tactile Sound and the Wild Versions, I
watched the screen emergence of the digital hand avatars and adjusted my hand
position and orientation to account for that movement. In that work, I spent much
time attending to the choreographic role of hand gestures during intra-action. Here,
however, my whole body is involved. Before tracing the trajectory of my solo
performance, I will unpack the visitor-led encounter, as this unfolds first.
Entering the room, visitors to Contact Zone encounter a row of bird of paradise plants
arranged in front of a large LCD screen (Figure 58). Attached to their stems are
ultrasonic microphones as well as a MIDI Sprout sensor, both using different
techniques to capture human inaudible frequencies. Next to the plants is a Leap
Motion gestural interface, the device that will activate the Unity system and send
digital augments to the screen. The screen itself is showing a webcam image stream:
the camera sensor is pointed in the direction of the visitor, but as well, it captures the
environment of the room. In this experience, two parallel processes are manifest:
firstly, picking up the gestural interface, participants experience their hands emerging
alongside a digital avatar with which they can co-compose; secondly, by using tactile
gestures on the leaves of the real plants, a sonic frequency shift can be activated, and
thoughtful hands may be able to modulate the plant’s signal. Open to the emergent
potentials of hand gestures in relation to the digital as well as the agential intra-actions
of plant multiplicities, the visitor-led experience allows participants to encounter – on
a more intimate scale – some of the primary materials and processes deployed in the
performance to follow. Visitors are given time with the reactive plants and the gestural
controller before my performance commences.
Fig. 58. Contact Zone Visitor-led experience showing reactive plants, webcam view on screen, and
visitors during intra-action. Installed at Black Box, 19-23 November 2018.
Stacked four tiers high with potted plants, a green wall is illuminated by the glow of a
studio light. At its side, two LCD screens are currently black. Soon, one screen will
burst forth with the mediated flow of an infrared signal, sent from the Leap Motion/
Vive apparatus, worn on the head of my performing body. At that moment, my hands
will be pressing the leaves of a living agave plant, working with its bio-electrical
emissions in an attempt to modulate that signal. Emerging in tandem are three
operations: the infrared signal with enfolded augments, passing from Unity to the
HMD to the LCD screen; the bio-electrical signal emitted from the agave, passing to
the MIDI Sprout then to Logic Pro X; and, the hand gestures and shifting body
movements I will be using to modulate the agave’s signal, and at the same time
articulate the augmented infrared signal. Tuning to the bio-electrical signal from an
agave plant, I modulate with its sonics, using the same basic format and gestures as
the Agave Relay performance in chapter 3. Digital augments emerge in tandem with
my hand gestures, as I shift the bio-electrical signal – operating as augmented audio
– coming from the agave plant (Figures 59 and 60).
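The relay from plant to augmented audio can be sketched only abstractly, since the MIDI Sprout's own conversion method is not documented here. The sketch below illustrates the general principle of mapping fluctuations in a bio-electrical reading onto MIDI note numbers; the scale, base note, and threshold are hypothetical.

```python
# An abstract sketch of a bio-electrical-signal-to-MIDI relay: changes
# between successive sensor readings are mapped onto notes of a
# pentatonic scale. The scale choice and thresholds are hypothetical;
# this is not the MIDI Sprout's actual algorithm.

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees as semitone offsets

def signal_to_notes(readings, base_note=60, threshold=2):
    """Emit a MIDI note whenever a reading changes by more than threshold."""
    notes = []
    for prev, curr in zip(readings, readings[1:]):
        delta = curr - prev
        if abs(delta) > threshold:
            degree = PENTATONIC[abs(delta) % len(PENTATONIC)]
            octave = 12 if delta > 0 else 0  # rising changes play an octave up
            notes.append(base_note + degree + octave)
    return notes

# A rising then falling fluctuation yields two notes:
print(signal_to_notes([10, 10, 18, 15, 15]))  # [79, 67]
```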
Fig. 59. Contact Zone video still. Hands modulate the agave as augments emerge in tandem on
LCD screen (LHS). Image: the artist. Full video documentation available:
https://rewawright.com/2018/11/28/mixed-reality-with-plants-data/
Fig. 60. Contact Zone, ‘agave modulation’ segment, HMD view, infrared signal captures my hand
( far left) as well as augments. Image: the artist.
When this process ends – about 6 or 7 minutes later – with the agave still bubbling its
sonified signal in the ambient space of the gallery, I move toward the green wall. Still
wearing the HMD, adapting to my new perceptual orientation, I move with slow and
careful gestures.17 Forces such as the weight and proportions of the headset, and demands
such as the need to be mindful of cables as I move, will also influence my movements.
How I spatially position my body will influence the digital augments and bio-electrical
modulations (Figure 61). With my vision limited, my bipedal capacities are necessarily
tentative: accounting for this new embodied perception, it would be unwise to rush.
Fig. 61. Contact Zone. HMD view as performer moves from agave to green wall. Image: the artist.
Taking a minute or three to reach the green wall – only a matter of metres away – I
pick up a second Leap Motion interface, configured for handheld performative use and
connected to a second computational network, parallel to the one that triggers the
Leap Motion/Vive apparatus (the HMD).18 The second LCD screen erupts with an
image stream (Figure 63). However, since my new infrared ‘vision’ transfers only the
electromagnetic radiation present in the room – and pixels on an LCD screen do not
emit heat – I cannot see the results of my intra-actions with the second Leap Motion
interface. Cut off from the wider scene by my new infrared vision, my gestures must
respond to sound and touch as cues. In this phase of performative interfacing, the
feeling of being in my body guides the gestures I make, allowing tactility and a sense
of embodied movement through space to take on an amplified role.
Touching the green wall, I aggravate ultrasonic microphones, causing analogue relays
that squeal and feed back in the room (Figure 62).19 Mixed by a sound technician, these
relays are attached to sound design components such as reverberation and delay,
convolved with the MIDI Sprout signal from the agave.20 Two sonic signals are now
blended together – from the agave and the green wall – yet I have influence over only
one (via the second handheld Leap Motion). During this phase of performative
interfacing, multiple material flows (digital, signaletic, corporeal, organic) are
conjunctive and contingent. Augmented materialities intra-act together to generate
multiplied phenomena that emerge in tandem, omni-directionally. For example, at the
moment infrared signal meets digital augments there are also physical gestures, bio-
electrical signals and ultrasonic frequencies, all circulating in relays that intersect and
overlap. Processually, augmented materialities augment one another. Circulating
through networked arrangements, the augmented materialities in Contact Zone are
recombinatory entities that recursively combine signal and data with corporeality.
Disrupted by signal, data, and noise – both visual and sonic – this relational system
places extra demands on my cognitive processes. Operating across different registers
of time, and consumed by a relational field of distortion and interference, my tasks are
multiplied. For example, I must frame the HMD with attention to the placement of
augments, improvise with the sound emitted by the plants, moderate my agitation of
the ultra-sonic disturbances, and improvise my gestures in alignment with the
environment, considered as a relational field of movement. With my perception
defined by the distorted view of the infrared signal, and my body restricted
Fig. 62. Contact Zone environment. L-R: LCD screen, performer, green wall. Image: the artist.
Fig. 63. Contact Zone. LCD screen installation view showing augments produced by head-mounted
and hand-held Leap Motion devices. Image: the artist.
Fig. 64. Contact Zone. HMD screen capture from Leap Motion/Vive apparatus. Temporally
synchronous with Figs. 62 and 63. Image: the artist.
computational networks, tangle with the fleshy material of the body, and the
slower-paced agential realities of living plants.
Intra-actions are temporal not in the sense that the values of particular
properties change in time; rather, which property comes to matter is
re(con)figured in the very making/marking of time (Barad 2007:180).
Through these relations with other orders of temporality – as well as the perceptual
impact of the spatial phenomena discussed earlier – performative interfacing
implicates me in an alternate reality that exceeds my everyday human embodiment.
We might call this new mode of embodiment a critical posthuman performance
modality. Experiencing the performance via the infrared signal and under the physical
constraints of the HMD, I am acutely aware of this apparatus as instantiating
boundary-making practices that shift my sense of embodiment from human to
something else. That something else, however, is not an enhanced posthumanism. As
has been argued throughout this research, the notion of aligning with a computer
simulation to enter an enhanced state of immersion also allies itself with the idea of
leaving the corporeal body behind. N. Katherine Hayles has shown that a view that
‘configures human being so that it can be seamlessly articulated with intelligent
machines’ (1999:3) was a popular theme of second order cybernetics. Interrogating
cybernetic narratives that would separate the informatic from the human body – such
as Hans Moravec’s Mind Children (1990), where it was proposed that human
consciousness would eventually be uploaded to cyborg bodies – Hayles questioned the
notion that the corporeal body might be replaced by an enhanced posthuman form of
physicality (1999:1). Moreover, she argued vociferously for the need to posit
'interventions ... to keep disembodiment from being rewritten, once again, into
prevailing concepts of subjectivity' (1999:5).
Likewise, Nicole Anderson (2017) discusses the conception she terms the
‘transcendent posthuman’, popularised more recently through the writing of Ray
Kurzweil and others in the futurist camp. In this view, technology is seen as the vector
which will allow humans to transcend the limits of our current biological form: as the
narrative goes, there would no longer be a material separation between virtual and
real, as well as machine and human (2017:18-19). However, the actual possibility of
such a transcendence occurring is almost certainly – at least in the short to medium
term – positioned in the realms of science fiction. For Anderson, this concept of a
transcendent posthumanism is anchored by the assumption that the human species is
separated from the animal kingdom and therefore should take a dominant role in
human-animal relations (33). Anderson suggests that a more productive thread – in
sympathy with the critical posthumanism pioneered by Hayles and others (see
Braidotti 2006, 2013; Ferrando 2013, 2016) – would be to ‘remind ourselves that
humans are always already part of the biosphere’: such a position might allow ‘us to
learn to live with these nonhuman others rather than in opposition to, in domination
of [them]’ (Anderson 2017:37).
In my Leap Motion/Vive performances in MR, I do not leave the body behind, so much
as multiply its instances, so that it is emergent in different modes that trouble the
artificially imposed separation between physical and digital topologies. Taken up by
different modes of matter, my body re-emerges in partial transmissions as code,
signal, and movement. Through strategies of performative interfacing via the software
assemblage formulation, my corporeality is contingently diffracted to digital space,
while concurrently adapting to new senses of the physical. The feeling of embodiment
generated here is a transient physiological state that intra-actively shifts with
technology. My research has chosen to articulate a view of human becoming with
technology, where the body is not a discrete or fully formed entity prior to contact with
technological devices or computational networks. Contact Zone is the last phase in a
research process that began with an interrogation of digital augments as informatic
1 Contact Zone is the name of the exhibition to be installed as the examination of this work.
Exhibition dates are 19-23 November 2018, at the Black Box, University of New South Wales,
Faculty of Art and Design, Greens Road, Paddington, Sydney.
2 Interactive Plant Growing (1992) is the first of many plant sensing artworks made by
Sommerer and Mignonneau. In the permanent collection of the ZKM Media Museum, Karlsruhe.
3 This artwork has had over 100 presentations since 2007, notably at ZKM Karlsruhe Centre for
Art and Media (Germany), at Daejeon Museum of Art (Korea), at Museum Art Gallery of Nova
Scotia (Canada), at National Centre for Contemporary Arts (Moscow), at Contemporary Art
Museum Raleigh (USA). http://www.scenocosme.com/akousmaflore_en.htm.
4 Barad gives an example of diffraction from classical physics, where ocean waves hit a rock, and
the rock operates like a ‘diffraction apparatus’ causing the wave to spread, overlap and bend in all
directions. In Barad’s quantum elaboration of diffraction, she argues that waves are not ‘things’
or ‘objects’ but ‘disturbances (which cannot be localised to a point) that propagate in a medium’
(74-76). In a quantum understanding, the intra-active phenomena caused by diffractive
processes – such as the impact and force of the waves hitting the rock – actually shift the matter
that molecularly composes it. Therefore, diffraction affects not only the wave itself (the visible
pattern of interference seen by the human eye), but also the object the wave hits (a phenomenon
that would only be visible using a quantum imaging apparatus).
5 First performance at the Shiraz Festival of the Arts, Iran, 1975.
6 For example, in biological science, a number of techniques exist that base themselves on
inserting needles into the root system of plants to measure electrical capacitance, thereby
determining root size and mass (Chloupek 1972; Rajkai, Végh, and Nacsa 2005). These, and
similar, scientific techniques for measurement, have been adapted by artists since the 1970s into
live performance devices that transduce energy from plants into voltage.
7 Notably, Lowenberg and Lifton created bio-sensing artworks included in the film The Secret
Life of Plants (1976).
8 John Baldessari (1972) Duration 00:18:08, United States, B&W, 1/2” open reel video.
9 Jessica Morgan (2009) “Somebody to talk to: John Baldessari.” Tate Etc. issue 17: Autumn
petunia, creating an entirely new transgenic creation that contests ‘our understanding of the
‘natural’ environment as well as of the environment of art’ (Osthoff 2009:1).
11 Research by biological scientists at the experimental edge of plant sensing, draws conclusions
Graphics and Interactive Techniques, Anaheim, CA, USA — August 02 - 06, 1993.
13 Retrieved from http://www.interface.ufg.ac.at/christa-
lived status of being merely raw material or tools’ (Haraway 2007: 206).
15 Analogies with ‘song’ are invoked in many descriptions, such as here:
https://www.digitalartarchive.at/database/general/work/akousmaflore.html
16 Following a humanist analysis, nature is considered as a resource within the purview of homo
sapiens, where its role in supporting human life is foregrounded. An increasing number of
thinkers break with this convention (including Haraway 2000; Plumwood 2002; Roa-Rodríguez
and van Dooren 2008; Cubitt 2017) to argue that nature classified as human property only serves
to encourage its exploitation. Resource-driven approaches to nature pose a fundamental
problem, since they position nature under the control of a regulatory web that is structured by
flows of capital. Irigaray and Marder frame what is at stake: ‘The fight over the appropriation of
resources will lead the entire planet to an abyss unless humans learn to share life, both with each
other and with plants. … The lesson taught by plants is that sharing life augments and enhances
the sphere of the living, while dividing life into so-called natural or human resources diminishes
it’ (Irigaray and Marder 2014).
17 The written description of Contact Zone given here is specific to the installation at the Black
Box, 19-23 November 2018. However, the photographic images referenced in Figures 62-70 are
from Contact Zone video documentation (see Appendix 1), which was a rehearsal for the
examination performance, enacted at the artist’s studio.
18 Timecode reference in video documentation is 01:45. See Appendix 1.
19 Timecode reference in video documentation is 02:46-04:25. See Appendix 1.
20 During the 15-minute performance, the sound technician is primarily mixing the signals from
the MIDI Sprout with the ultra-sonic microphone inputs, paying particular attention to the
analogue feedback so it does not overpower the quieter micro-impulses from the agave plant.
21 Having performed fairly extensively as a musician, I have found no equivalent to this kind of
‘playing’. That is why I have preferred the term ‘modulation’ in this thesis. Playing
assumes that interfacing will occur through a recognised and verified layout, determined in
advance. In the agave modulation – as well as in the visitor-led experience that precedes it –
there is no pre-determined structure of ‘notes’, since the signals emitted by the plant in any given
instant are unknown in advance. Signal only becomes a ‘note’ after it is converted to a MIDI
sequence in Logic Pro X.
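The conversion note 21 describes – raw plant signal only becoming a ‘note’ once digitised – can be sketched in outline. This is an illustrative sketch only, not the MIDI Sprout’s or Logic Pro X’s actual algorithm; the function name, voltage range, and pitch bounds are all hypothetical assumptions:

```python
# Hypothetical sketch: scaling bio-electrical voltage readings onto MIDI
# note numbers. The 0-5 V range and the C2-C6 pitch bounds are assumptions,
# not the MIDI Sprout's real mapping.

def voltage_to_midi_note(v, v_min=0.0, v_max=5.0, note_lo=36, note_hi=84):
    """Linearly scale a voltage reading onto a MIDI note range."""
    v = max(v_min, min(v, v_max))          # clamp out-of-range readings
    span = (v - v_min) / (v_max - v_min)   # normalise to 0..1
    return note_lo + round(span * (note_hi - note_lo))

# The raw signal is just voltage; a 'note' exists only after conversion.
samples = [0.0, 1.25, 2.5, 5.0]
notes = [voltage_to_midi_note(s) for s in samples]  # → [36, 48, 60, 84]
```

Because the plant’s voltage in any given instant is unknown in advance, the resulting pitch sequence is likewise indeterminate, which is what distinguishes ‘modulation’ from ‘playing’ a pre-determined layout.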
22 Timecode reference in video documentation is 00:54. See Appendix 1.
CONCLUSION
The tangible outcomes of this practice-based research are knotted together as two
interwoven strands: the software assemblage as an assistive formulation for
generating an alternative version of MR; and, a set of techniques and methods for
performative interfacing with augmented materialities. Both strands have emerged in
tandem, through strategically enacted re-combinations of theory and practice. To re-
work digital augments from their conventional position as informatic overlays, toward
an entangled relation that would offer the potential for oscillations across digital and
physical topologies, I have investigated the software assemblage as a radical
networked arrangement. Pursued through a suite of techniques that performatively
interface with MR, the speculation that digital augments can be more than informatic
has led to the conception that augments might also exceed the purely digital: that they
are, in fact, augmented materialities.
Digital augments, in the sense they are outlined in computer science and engineering,
are considered to be discretely formed in advance of their mutual and reciprocal
contact with one another. It has been argued that, while digital augments are data,
they do not necessarily need to be fixed as informatic: I have suggested that the
informatic overlay approach is unnecessarily dominant in current practice and
discourse in MR, having migrated from technical paradigms, through to culturally
engaged fields. Therefore, the first task of this thesis was to clarify the operations of
digital augments as data, outside of the informatic. Loosening data from the informatic
allowed augments to be imbricated with software assemblages, where they were re-
combined with unexpected materials, such as living plants. Within the software
assemblage formulation, performative interfacing has operated as a strategy with
which to generate MR artwork that is iterative and affective. Using performative
interfacing as a strategy also de-structured the normative uses of certain technical
instruments for delivering MR (like the Leap Motion) and gave consideration to the
ways that we might generate tactile, signaletic, algorithmic, and gestural patterns of
interference. The results of these patterns (left behind on bodies-data-plants) are the
Through the software assemblage, this research has developed techniques for
affectively co-composing with expressive conjunctions of materials, including:
recursive strategies for modulating digital augments; amplifications of bio-electrical
data from plants to produce augmented audio; choreographing hand micro-gestures
in tactile and signaletic connections with both augments and plants; and passing
augments through the Leap Motion interface in two hardware configurations,
handheld and head mounted, eventually folding the two methods together in Contact
Zone.
The network of matter and materials arranged by the software assemblages in this
research can be broadly understood as a relational field of movement, further
troubled by the patterns of interference caused by material diffractions. Here,
entanglements between modes of matter (infrared, bio-electrical, fleshy, and digital)
shifted constellations of bodies-plants-data, as they dynamically and rhythmically
aligned in motion. Moreover, alignment was not considered as pre-given via a
computational system. Rather, it was argued that, since the emergence of augmented
materialities is entangled, intra-actively, with corporeality and the signaletic,
alignment must likewise be rhythmic, speciated, and felt through the relational field
with which it co-emerged (Manning 2013:210). Techniques for performing in such
rhythmic alignments used my body movements to diffract signaletic and digital flows,
and embraced tactility to shift relations with other materialities. Thought was given to
methods that might bring together structurally separate sites, such as physical and
digital. To achieve this I explored: the disjunctive software tracking afforded by signal
inertia (chapter 3); the enfolded materiality of the infrared signal as it passed through,
and re-emerged from, the Leap Motion/Vive apparatus (chapters 3 and 4); and, ways
that I could use my body to shift relations with flows of digital and signaletic matter,
both of which were understood as augmented materialities.
Several modes of signal have been explicated as diffractive, nonhuman forces that
emerge in the relation: at first, signal inertia caused by webcam delay in the Wild
Versions opened up a durational gap that modulated my hand gestures and the data
system’s response; then, in the Tactile Signal performances, plants as producers of
bio-electrical signals manifested an alternative approach to augmented audio; and, in
the Tactile Signal performances, as well as Contact Zone, the Leap Motion’s infrared
signal was used to re-work the materiality of the digital augments, enfolding them
within an electromagnetic plane. Through performative intra-actions in the Tactile
Signal performances as well as in Contact Zone, augmented materialities emerged
through a relational multitude of entangled movements, where human and nonhuman
forces emerged, co-composed, and diffracted through one another.
An unexpected impact of this project is that the performative approach of the software
assemblage has resonated with certain artistic practices from mobile AR that likewise
explore aspects of embodiment. In 2016, the notable AR/VR artists Tamiko Thiel and
Will Pappenheimer presented a paper at the College Art Association (CAA) Conference
in Washington, where they elaborated the software assemblage formulation in
combination with their practice of décollage in public space. This talk was followed
soon after by an article in the well-regarded Media-N: the Journal of the New Media
Caucus (Thiel and Pappenheimer 2016). Such unexpected interest from practitioners
in the AR art community – producing artwork entirely different from my own –
underscores the potential for the pragmatic application of the software assemblage
outside of my forays.
There is significant future scope for research in the field of MR in media art. Many
uninterrogated issues remain, relating to the affective potential of bodies, the impact
of the virtual on our expanded senses of perception, and the nuanced modes of
embodiment that stretch outside of screen space (to name just a few). This Doctoral
dissertation has attempted to analyse the notion of what an augment might do, and
open that to a broader and more extensive conception of materiality than was afforded
by taxonomic models such as the RV Continuum and commercial/engineering
paradigms like the informatic overlay.
With Haraway, my hope is that this research has in some small way diffracted ‘the rays
of technoscience [to] get more promising interference patterns on the recording films
of our lives and bodies’, where ‘life’ is a category that encompasses the nonhuman, and
‘bodies’ are digital, human, and plant (Haraway 1997:16). Rather than setting up a
prescription for how to create or design with augments, I have asked what augmented
materialities are capable of, as they resonate through rhythmic constellations, oscillate
in recursive relays, and generate disturbances that modulate through networks.
Apprehending augmented materialities via the software assemblage takes us beyond
the informatic overlay approach, teasing out some of the trouble that lurks in an
alternatively assembled MR, located at the edge of control.
References
Aceti, Lanfranco and Richard Rinehart. 2013. Not Here Not There. Leonardo
Electronic Almanac 19(2).
Andersen, Christian Ulrik, Søren Bro Pold, eds. 2011. Interface criticism: Aesthetics
beyond the buttons. Aarhus: Aarhus University Press.
Andersen, Christian Ulrik and Søren Bro Pold. 2018. The Metainterface: The Art of
Platforms, Cities, and Clouds. Cambridge, MA: The MIT Press
Ando, Ki, Yuki Hasegawa, Tamaki Yaji, and Hidekazu Uchida. 2011. "Study of plant
bioelectric potential response due to photosynthesis reaction". Pp. 337-342.
IEEJ Transactions on Sensors and Micromachines, (131).
Ariso, José Maria, ed. 2017. Augmented Reality: Reflections on Its Contribution to
Knowledge Formation. Berlin: De Gruyter.
Aghajan, Zahra M., Lavanya Acharya, Jason J. Moore, Jesse D. Cushman, Cliff
Vuong, and Mayank R. Mehta. 2015. "Impaired spatial selectivity and intact
phase precession in two-dimensional virtual reality". Nature Neuroscience 18: 121-128.
Azuma, Ronald. T. 1997. “A survey of augmented reality.” in Presence: Teleoperators
and virtual environments 6. 4:355-385. Cambridge, MA: The MIT Press
Journals.
Baldessari, John. 1972. Teaching a plant the alphabet. Duration 00:18:08,
B&W, 1/2” open reel video.
Ballard, Dana H., and Christopher M. Brown. 1982. Computer Vision: Stereo Vision
and Triangulation.
Barad, Karen. 1996. "Meeting the universe halfway: Realism and social
constructivism without contradiction". Pp 161-194. In Feminism, science, and
the philosophy of science, edited by J. Nelson. Switzerland: Springer Science
& Business Media.
Barad, Karen. 2003. “Posthumanist performativity: Toward an understanding of how
matter comes to matter”. Pp 801-831. Signs: Journal of women in culture and
society 28(3).
Barad, Karen. 2007. Meeting the universe halfway: quantum physics and the
entanglement of matter and meaning. Durham, NC: Duke University Press.
Barakonyi, István, and Dieter Schmalstieg. 2005. “Augmented reality agents in the
development pipeline of computer entertainment”. In Proceedings of the
International Conference on Entertainment Computing. 345-356.
Barlow, Cleve. 1991. Tikanga Māori. Auckland: Oxford University Press.
Bekele, Mafkereseb Kassahun, Roberto Pierdicca, Emanuele Frontoni, Eva Savina
Malinverni, and James Gain. 2018. "A Survey of Augmented, Virtual, and
Mixed Reality for Cultural Heritage". Journal on Computing and Cultural
Heritage 11(2).
Beloff, Laura, and Jonas Jørgensen. 2016. “The Condition: Towards Hybrid Agency”.
Pp 14-19. In CULTURAL R>EVOLUTION: Proceedings of the 22nd
International Symposium on Electronic Art.
Benford, Steve, Martin Flintham, Adam Drozd, Rob Anastasi, Duncan Rowland, Nick
Tandavanitj, Matt Adams, Ju Row-Farr, Amanda Oldroyd, and Jon Sutton.
2004. “Uncle Roy All Around You: Implicating the City in a Location-Based
Performance.” Retrieved 9 September 2014.
http://www.blasttheory.co.uk/wp-
content/uploads/2013/02/research_uraay_implicating_the_city.pdf
Benford, Steve and Gabriella Giannachi. 2011. Performing mixed reality. Cambridge,
MA: The MIT Press.
Bennett, Jane. 2009. Vibrant matter: a political ecology of things. Durham, NC:
Duke University Press.
Berry, David. M. 2011. The philosophy of software: Code and mediation in the
digital age. London: Palgrave Macmillan.
Billinghurst, Mark, Adrian Clark and Gun Lee. 2015. "A survey of augmented reality".
Pp. 73-272. In Foundations and Trends in Human–Computer Interaction
8(2-3).
Blast Theory 2003. Uncle Roy All Around You. Site specific mixed media artwork,
various locations, London, U.K. Premiered at the Institute of Contemporary
Arts in London in June 2003. Retrieved 11 April 2014.
https://www.blasttheory.co.uk/projects/uncle-roy-all-around-you/
Blast Theory. 2009. FlyPad. Site specific mixed media artwork designed for the
Public Gallery, West Bromich, England. Retrieved 14 August 2016.
https://www.blasttheory.co.uk/projects/flypad/
Bögre, László, and Gerrit Beemster, eds. 2008. Plant growth signaling. Springer
Science & Business Media.
Bolter, Jay and Richard Grusin. 1999. Remediation: Understanding new media.
Cambridge, MA: The MIT Press.
Bolter, Jay & Diane Gromala. 2003. Windows and mirrors: Interaction design,
digital art, and the myth of transparency. Cambridge, MA: The MIT Press.
Boj, Clara, and Diego Díaz. 2008. "The Hybrid City: Augmented Reality for
Interactive Artworks in the Public Space". In The Art and Science of Interface
and Interaction Design. 141-161.
Bonta, Mark and John Protevi. 2004. Deleuze and Geophilosophy a guide and
glossary. Scotland: Edinburgh University Press.
Braidotti, Rosi. 2006. "Posthuman, All Too Human". Theory, Culture & Society. 23:
197–208.
Braidotti, Rosi. 2013. The posthuman. Oxford, U.K.: Polity Press.
Buren, Daniel, and Thomas Repensek. 1979. "The function of the studio." October.
10:51-58.
Cage, John. 1975. Child of tree. Edition Peters Group, Frankfurt/Main, Leipzig,
London, New York.
Candy, Linda. 2006. Practice-based research: a guide. Retrieved June 16, 2015.
https://www.creativityandcognition.com/resources/PBR%20Guide-1.1-
2006.pdf. Creativity and Cognition Studios, UTS Sydney.
Cárdenas, Micha, Christopher Head, Todd Margolis, and Kael Greco. 2009.
"Becoming Dragon: a mixed reality durational performance in Second Life".
The Engineering Reality of Virtual Reality. 7238:801-807.
Cardiff, Janet & George Bures Miller. 2014. The City of Forking Paths. Augmented
Reality app available in various locations, The Rocks, Sydney, Australia.
Retrieved 16 April 2014. https://itunes.apple.com/us/app/the-city-of-
forking-paths/id870332593?mt=8.
Carmigniani, Julie, and Borko Furht. 2011. "Augmented reality: an overview". Pp. 3-
46. In Handbook of augmented reality, edited by Borko Furht. Switzerland:
Springer International Publishing.
Carroll, John M., ed. 2003. HCI models, theories, and frameworks: Toward a
multidisciplinary science. Amsterdam, Netherlands: Elsevier.
Caudell, Thomas P., and David W. Mizell. 1992. “Augmented reality: An Application
of Heads-up Display Technology to Manual Manufacturing Processes.” Pp
659-669. In Proceedings of the Hawaii International Conference on System
Sciences.
Chloupek, O. 1972. “The relationship between electric capacitance and some other
parameters of plant roots.” Biologia Plantarum 14(3): 227–230. doi:
10.1007/bf02921255.
Chun, Wendy Hui Kyong. 2011. Programmed visions: Software and Memory.
Cambridge, MA: The MIT Press.
Cohen, Michael, Shigeaki Aoki, and Nobuo Koizumi. 1993. "Augmented audio reality:
Telepresence/VR hybrid acoustic environments." In Proceedings of the 2nd
IEEE International Workshop on Robot and Human Communication. 361-
364.
Conger, Kate. 2016. "Niantic responds to senate inquiry into Pokémon GO privacy".
TechCrunch Magazine. Retrieved January 12, 2017
https://techcrunch.com/2016/09/01/niantic-responds-to-senate-inquiry-
into-pokemon-go-privacy/.
Cubitt, Sean. 2017. Finite media: Environmental implications of digital
technologies. Durham, NC: Duke University Press.
Davies, Char. 1995. "Osmose: Notes on Being in Immersive Virtual Space". ISEA ’95
Conference Proceedings, Montreal, Canada.
Davies, Char. and John Harrison. 1996. "Osmose: towards broadening the aesthetics
of virtual reality". Pp 25-28. In Computer Graphics 30(4). ACM SIGGRAPH.
Davis, Lucy. 2011. “In the Company of Trees.” Pp.43-62. In Antennae: the Journal
of Nature and Culture. Issue 17 Summer.
DeLanda, Manuel. 1998. “Meshworks, hierarchies, and interfaces”. John Beckman,
ed. The Virtual Dimension: Architecture, Representation, and Crash Culture.
New York: Princeton Architectural Press. Retrieved December 12, 2014
http://cumin-cad.architexturez.net/system/files/pdf/7f71.content.pdf
DeLanda, Manuel. 1997. A thousand years of nonlinear history. New York: Zone
Books.
DeLanda, Manuel. 2008. "The expressivity of space". Canadian Art Magazine, Issue
252. Pp.103-107.
Deleuze, Gilles and Félix Guattari. 1987. A thousand plateaus: Capitalism and
schizophrenia. Trans. Brian Massumi. Minnesota: University of Minnesota
Press.
Dix, Alan. 2009. "Human-Computer Interaction". In Encyclopedia of Database
Systems, edited by L. Liu and M.T Özsu. Boston, MA: Springer Publishing.
Dourish, Paul. 2004. Where the action is: the foundations of embodied interaction.
Cambridge, MA: The MIT Press.
Dourish, Paul. 2017. The stuff of bits: An essay on the materialities of information.
Cambridge, MA: The MIT Press.
Doyle, D. 2014. “New Opportunities for Artistic Practice in Virtual Worlds”. Pp.
321-326. In Proceedings of the 2014 International Conference on Cyberworlds
(CW). New York; IEEE Press.
Dyson, Freeman J. 1998. "Science as a craft industry". Science 280(5366): 1014-
1015.
Ekman, Ulrik. 2013. "Of Intangible Speed: 'Ubiquity' as Transduction of
Interactivity". Pp. 279-309. In Throughout: Art and culture emerging with
ubiquitous computing, edited by Ulrik Ekman. Cambridge, MA: The MIT
Press.
Ekman, Ulrik. 2012. "Of the Untouchability of Embodiment I: Rafael Lozano-
Hemmer's Relational Architectures", retrieved from
https://journals.uvic.ca/index.php/ctheory/article/view/14943/5838
Engberg, Maria. and Jay Bolter. 2014. “Cultural expression in augmented and mixed
reality”. Pp.3-9. In Convergence (20:1).
Ferguson, Russell, M. Tucker and John Baldessari. 1990. Discourses: Conversations
in Postmodern Art and Culture. New York : New Museum of Contemporary
Art ; Cambridge, MA. : the MIT Press.
Ferrando, Francesca. 2013. "Posthumanism, transhumanism, antihumanism,
metahumanism and new materialism: Relationships and differences." Pp 26-
32. In Existenz, An International Journal of Philosophy, Religion, Politics,
and the Arts. Volume 8. No. 2, Fall 2013.
Ferrando, Francesca. 2016. “The Party of the Anthropocene: Post-humanism,
Environmentalism and the Post-Anthropocentric Paradigm Shift.” Pp 159-173.
In Relations, 4.2.
Ferrari, Maud, Brian Wisenden and Douglas Chivers. 2010. "Chemical ecology of
predator–prey interactions in aquatic ecosystems: A review and prospectus."
Pp 698-724. In Canadian Journal of Zoology (88).
Fischer, J.C. 1975. Piano tuning: a simple and accurate method for amateurs.
Courier Corporation.
Fisher, J. 1999. “Char Davies.” Pp 53-54. In Parachute 94.
Fromm, J. and Lautner, S. 2007. "Electrical signals and their physiological
significance in plants." Pp 249-257. In Plant, Cell & Environment 30(3).
Fuller, Mathew. 2005. Media ecologies: Materialist energies in art and
technoculture. Cambridge, MA: The MIT Press.
Fujii, K. and Okumura, Y. 2012. "Effect of earth ground and environment on body-
centric communications in the MHz band." In International Journal of
Antennas and Propagation.
Freeman, John C. et al. 2012. "ManifestAR: an augmented reality manifesto." Pp
82890D-82890D. In IS&T/SPIE Electronic Imaging. International Society
for Optics and Photonics.
Friedberg, Anne. 2006. The virtual window: from Alberti to Microsoft. Cambridge,
MA: the MIT Press.
Gagliano, Monica, Mancuso, S. and Robert, D. 2012. "Towards understanding plant
bioacoustics." Pp.323-325. In Trends in Plant Science 17(6).
Gagliano, Monica and Renton, M. 2013. "Love thy neighbour: Facilitation
through an alternative signalling modality in plants." BMC Ecol 2013; 13:19;
PMID:23647722; http://dx.doi. org/10.1186/1472-6785-13-19
Galloway, Alexander. 2013. The Interface effect. Oxford, England: Polity Press.
Gemeinboeck, Petra. and Saunders, Rob. 2011. “Other Ways Of Knowing: Embodied
Investigations of the Unstable, Slippery and Incomplete." In, The Fibreculture
Journal, FCJ-120.
Geroimenko, Vladimir. ed. 2014. Augmented reality art: from an emerging
technology to a novel creative medium. 1st edition. Switzerland: Springer
International Publishing.
Geroimenko, Vladimir. ed. 2018. Augmented reality art: from an emerging
technology to a novel creative medium. 2nd edition. Switzerland: Springer
International Publishing.
Giannachi, Gabriella, Duncan Rowland, Steve Benford, J. Foster, Matt Adams, and
Alan Chamberlain. 2010. "Blast Theory's Rider Spoke, its documentation and
the making of its replay archive." Pp 353-367. In Contemporary Theatre
Review 20(3).
Gibson, Prudence. 2018. The Plant Contract: Art’s Return to Vegetal Life.
Amsterdam: Brill.
Grau, Oliver. 2003. Virtual Art: From illusion to immersion. Cambridge, MA: the
MIT Press.
Gregg, Melissa, and Gregory J. Seigworth, eds. 2010. The affect theory reader.
Durham, NC: Duke University Press.
Greuter, Stefan, and David Roberts. 2014. "Spacewalk: Movement and interaction in
virtual space with commodity hardware." Pp. 1-7. In Proceedings of the 2014
Conference on Interactive Entertainment. Association for Computing
Machinery.
Guna, J., Jakus, G., Pogačnik, M., Tomažič, S., & Sodnik, J. 2014. "An analysis of the
precision and reliability of the leap motion sensor and its suitability for static
and dynamic tracking." Sensors 14(2): 3702-3720.
Hall, Michael. 2011. Plants as persons: a philosophical botany. Albany: State
University of New York Press.
Hansen, Mark. B., 2012. Bodies in code: Interfaces with digital media. New York:
Routledge.
Haraway, Donna J. 1997. Modest_Witness@ Second_Millennium .FemaleMan
_Meets_OncoMouse: Feminism and Technoscience. New York: Routledge.
Haraway, Donna J. 2000. How like a leaf: an interview with Thyrza Nichols
Goodeve. New York: Routledge.
Haraway, Donna J. 2003. The companion species manifesto: dogs, people, and
significant otherness. Chicago, Ill. : Prickly Paradigm.
Haraway, Donna J. 2007. When Species Meet. Minneapolis: University of Minnesota
Press.
Haraway, Donna. 2015. "Anthropocene, Capitalocene, Chthulhocene. Donna Haraway in
Conversation with Martha Kenney." Pp.255-270. In Art in the Anthropocene,
edited by Heather Davis and Etienne Turpin. London, U.K.: Open Humanities
Press.
Harma, Aki, Julia Jakka, Miikka Tikander, Matti Karjalainen, Tapio Lokki, and Heli
Nironen. 2003. "Techniques and applications of wearable augmented reality
audio." In Audio Engineering Society Convention Proceedings .114-119.
Hayles, N. Katherine. 1999. How we became posthuman: virtual bodies in cybernetics,
literature, and informatics. Chicago: University of Chicago Press.
Henchoz, Nicholas, Vincent Lepetit, Pascal Fua, John Miles. 2011. “Turning
Augmented Reality into a media: Design exploration to build a dedicated
visual language.” Pp 83-89. In Proceedings of the International Symposium
on Mixed and Augmented Reality. New York; IEEE Press.
Hollerer, Tobias, Dieter Schmalstieg and Mark Billinghurst. 2009. “AR 2.0: Social
Augmented Reality - social computing meets Augmented Reality”. 8th IEEE
International Symposium on Mixed and Augmented Reality. doi:
10.1109/ismar.2009.5336443. New York; IEEE Press.
Hookway, Branden. 2014. Interface. Cambridge, MA: the MIT Press.
Housefield, John. 2007. "Sites of time: organic and geologic time in the art of Robert
Smithson and Roxy Paine." Cultural Geographies 14(4): 537-561.
Huang, Weidong, Leila Alem, and Mark Livingston, eds. 2012. Human factors in
augmented reality environments. New York: Springer Science & Business Media.
Huhtamo, Erkki. 2004. "Trouble at the interface, or the identity crisis of interactive
art." Retrieved 9 October 2016.
http://mediaartscultures.eu/jspui/bitstream/10002/299/1/Huhtamo.pdf
Irigaray, Luce and Michael Marder. 2014. "Without clean air, we have nothing".
Retrieved 6 March 2015.
https://www.theguardian.com/commentisfree/2014/mar/17/clean-air-
paris-pollution-crime-against-humanity.
Johnson, C.G. 2003. “Towards a prehistory of evolutionary and adaptive
computation in music”. Pp. 502-509. In Workshops on Applications of
Evolutionary Computation. Berlin, Heidelberg: Springer.
Johnston, John. 2008. The Allure of Machinic Life: Cybernetics, Artificial Life, and
the new AI. Cambridge, MA: The MIT Press.
Jones, Mark. 1995. "Char Davies: VR through Osmosis". Pp. 24-28. In CyberStage,
2(1) (Fall 1995).
Juhász, L. and Hochmair, H.H. 2017. "Where to catch 'em all? A geographic analysis
of Pokémon Go locations." Pp. 241-251. In Geo-spatial Information Science,
20(3).
Kaiser, Philipp and Miwon Kwon. 2012. Ends of the Earth: Land Art to 1974. Prestel
Publishing.
Kent, James. 2012. The Augmented Reality Handbook: Everything you need to know
about Augmented Reality. Emereo Publishing.
Klemmer, Scott, Björn Hartmann, and Leila Takayama. 2006. “How bodies matter: five
themes for interaction design.” Pp. 140-149. In Proceedings of the 6th
Conference on Designing Interactive Systems. Association for Computing
Machinery.
Kwon, Miwon. 2004. One place after another: Site-specific art and locational
identity. Cambridge, MA: the MIT Press.
Lambert-Beatty, Carrie. 2008. Being watched: Yvonne Rainer and the 1960s.
Cambridge, MA: the MIT Press.
Leap Motion SDK. 2010-present. Retrieved March 1, 2015.
https://www.leapmotion.com/
Levin, Golan, Chris Sugrue and Kyle McDonald. 2014. Augmented Hand Series.
Cinekid Festival, Amsterdam. Retrieved 18 November 2016.
http://www.flong.com/projects/augmented-hand-series/
Lévy, Pierre. 2001. Cyberculture. Minneapolis: University of Minnesota Press.
Lichty, Patrick. 2014. "The Aesthetics of Liminality: Augmentation as an Art Form."
Pp. 99-125. In Augmented Reality Art, edited by Vladimir Geroimenko.
Switzerland: Springer International Publishing.
Plumwood, Val. 2002. Environmental Culture: The Ecological Crisis of Reason.
New York: Routledge.
Pohatu, Taina W. 2011. “Mauri: Rethinking Human Well-being.” In MAI Review,
2011(3).
Portanova, Stamatia. 2013. Moving without a body: Digital philosophy and
choreographic thought. Cambridge, MA: The MIT Press.
Rainer, Yvonne. 1966. Hand Movie. 8mm black and white film. Retrieved 20
December 2016. https://coub.com/view/80y37.
Rainer, Yvonne. 1974. Work 1961-73. Halifax, N.S.: Press of the Nova Scotia College
of Art and Design.
Rajkai, K., Végh, K. R. and Nacsa, T. 2005. “Electrical capacitance of roots in relation
to plant electrodes, measuring frequency and root media.” Pp. 197–210. In
Acta Agronomica Hungarica, 53(2).
Reblitz, Arthur A. 1976. Piano servicing, tuning, & rebuilding: For the professional,
the student, the hobbyist. Vestal Press.
Reed, A.H. 1963. Treasury of Māori folklore. Wellington, New Zealand: A.H. & A.W.
Reed.
Rheingold, Howard. 1991. Virtual Reality: Exploring the Brave New Technologies of
Artificial Experience and Interactive Worlds - From Cyberspace to
Teledildonics. London: Secker & Warburg.
Riley, Mathew and Troy Innocent. 2014. "The augmented bush walk: adaptation in
crossmedia ecologies." Pp. 234-247. In Proceedings of xCoAx 2014, Portugal:
Universidade do Porto.
Riley, Mathew and Adam Nash. 2014. "Contemplative interaction in mixed reality
artworks". Pp. 260-266. In Proceedings of the 20th International Symposium
on Electronic Art, Dubai, United Arab Emirates, 30 October - 8 November
2014. Retrieved 8 May 2016.
Roa-Rodríguez, C. and Thom van Dooren. 2008. “Shifting Common Spaces of Plant
Genetic Resources in the International Regulation of Property.” Pp. 176-202.
In The Journal of World Intellectual Property, 11(3).
Robertson, George, Mary Czerwinski, and Maarten Van Dantzich. 1997. "Immersion in
desktop virtual reality." In Proceedings of the 10th Annual ACM Symposium on
User Interface Software and Technology. Association for Computing Machinery.
Schwartz, Mark Donald, ed. 2003. Phenology: an integrative environmental
science. Switzerland: Springer International Publishing.
Sutherland, Ivan E. 1968. "A head-mounted three dimensional display". Pp. 757-764.
In Proceedings of the December 9-11, 1968, fall joint computer conference,
part I. Association for Computing Machinery.
Thiel, Tamiko. 2011. “Cyber Animism and Augmented Dreams.” In Leonardo Electronic
Almanac. Retrieved 7 September 2014. http://www.leoalmanac.org/wp-
content/uploads/2011/04/LEA_Cyber-Animism_TamikoThiel.pdf.
Thiel, Tamiko and Will Pappenheimer. 2013-ongoing. Biomer Skelters. Mixed media
artwork. First staged at FACT Gallery Liverpool. Subsequent major iterations
at ISEA2014 Dubai and Virtuale Festival Switzerland. Retrieved 18 April
2014. http://www.biomerskelters.com/
Thiel, Tamiko and Will Pappenheimer. 2016. “Assemblage and Décollage in Virtual
Public Space.” Media-N: the Journal of the New Media Caucus, CAA
Conference edition 2016. ISSN: 1942-017X.
Thomsen, Bodil M.S. 2011. “The Haptic Interface: On Signal Transmissions and
Events.” In Interface criticism: Aesthetics beyond the buttons, edited by
Christian Ulrik Andersen and Søren Bro Pold. Aarhus University Press.
Thomsen, Bodil M.S. 2012. “Signaletic, haptic and real-time material.” Pp. 1-10.
In Journal of Aesthetics & Culture, 4(1).
Trewavas, Anthony. 2005. “Green plants as intelligent organisms.” Pp. 413–419. In
Trends in Plant Science, 10(9). Retrieved 7 May 2017. doi:
10.1016/j.tplants.2005.07.005.
Ulmer, Gregory L., and John Craig Freeman. 2014. "Beyond the virtual public
square: Ubiquitous computing and the new politics of well-being." Pp. 61-79.
In Augmented reality art, edited by Vladimir Geroimenko. Switzerland:
Springer International Publishing.
Unity SDK. Retrieved March 3, 2014. https://unity3d.com/get-unity/download.
Van der Tuin, Iris and Rick Dolphijn. 2012. New materialism: Interviews &
cartographies. Open Humanities Press.
Van Krevelen, D.W.F. and Poelman, R. 2010. “A survey of augmented reality
technologies, applications and limitations.” International Journal of Virtual
Reality, 9(2).
Vieira, Patricia, Monica Gagliano, and John Ryan. 2016. The green thread:
dialogues with the vegetal world. Lanham: Lexington Books.
Vincs, Kim, Alison Bennett, John McCormick, Jordan Beth Vincent, and Stephanie
Hutchison. 2014. "Skin to skin: Performing augmented reality." Pp. 161-174.
In Augmented Reality Art, edited by Vladimir Geroimenko. Switzerland:
Springer International Publishing.
Vincs, Kim. 2016. “Virtualizing Dance”. Pp. 263–82. In The Oxford handbook of
Screendance studies, edited by Douglas Rosenberg. New York: Oxford
University Press.
Weibel, Peter. 2001. Olafur Eliasson: Surroundings surrounded: essays on space
and science. Cambridge, MA: The MIT Press.
Weichert, F., Bachmann, D., Rudak, B., and Fisseler, D. 2013. "Analysis of the accuracy
and robustness of the leap motion controller". Sensors, 13(5), pp. 6380-6393.
Weinzierl, Stefan, and Steffen Lepa. 2017. "On the Epistemic Potential of Virtual
Realities for the Historical Sciences. A Methodological Framework". Pp. 61-82.
In Augmented Reality: Reflections on Its Contribution to Knowledge
Formation, edited by José Maria Ariso. Berlin: De Gruyter.
Whitelaw, Mitchell. 2004. Metacreation: art and artificial life. Cambridge, MA: The
MIT Press.
Whitelaw, Mitchell. 2012. "Transmateriality: Presence Aesthetics and the Media Arts."
Pp. 223–236. In Throughout: Art and Culture Emerging With Ubiquitous
Computing, edited by Ulrik Ekman. Cambridge, MA: The MIT Press.
Winograd, Terry and Fernando Flores. 1986. Understanding computers and cognition:
A new foundation for design. Bristol: Intellect Books.
Witzgall, Susanne. 2016. “Overlapping Waves and New Knowledge: Difference,
Diffraction, and the Dialog between Art and Science." Pp. 141-152. In
Recomposing Art and Science: artists-in-labs, edited by Jill Scott and Irène
Hediger. Walter de Gruyter GmbH & Co KG.
Woodward, Susan L. 2009. Introduction to Biomes. Santa Barbara, CA: Greenwood
Press.
Wright, Rewa. 2013. “Exploring the responsive site: Ko maungawhau ki runga.” In
Proceedings of the 19th International Symposium on Electronic Art, ISEA2013,
edited by K. Cleland, L. Fisher and R. Harley. Sydney, Australia. Retrieved from
http://hdl.handle.net/2123/9700
Wright, Rewa. 2014. “From the bleeding edge of the network: Augmented reality and
the software assemblage.” Pp. 185-193. In Post Screen: Device, Medium and
Concept, edited by Helena Ferreira and Ana Vicente. Lisbon, Portugal:
CIEBA-FBAUL.
Wright, Rewa. 2015. “Mobile augmented reality art and the politics of re-assembly.”
Proceedings of the 21st International Symposium on Electronic Art.
Vancouver, B.C: ISEA International.
Wright, Rewa. 2016. Tactile Light. Mixed media software assemblage.
https://www.youtube.com/watch?v=KRd2kBTRkYA. See Appendix 1 for
video documentation available on accompanying USB key.
Wright, Rewa. 2016a. “Augmented reality as experimental art practice: from
information overlay to software assemblage.” Proceedings of the 22nd
International Symposium on Electronic Art. Hong Kong, China: ISEA
International.
Wright, Rewa. 2016b. “Augmented Virtuality: Remixing the Human-Art-Machine.”
Pp 158-166. In Post Screen: Intermittence + Interference, edited by Helena
Ferreira and Ana Vicente. Edicoes Universitarias Lusofonas, Lisbon.
Wright, Rewa. 2017c. Tactile Sound. Mixed media software assemblage.
https://www.youtube.com/watch?v=alxwMb4KQSQ. See Appendix 1 for video
documentation available on accompanying USB key.
Wright, Rewa. 2017d. the Wild Versions (1-4). Mixed media software assemblage.
Performance recorded at A.H. Reed Park, Whangarei, Aotearoa-New
Zealand.
https://www.youtube.com/edit?o=U&ar=1&video_id=nGT0yulyXpk.
See Appendix 1 for video documentation available on accompanying USB
key.
Wright, Rewa. 2018a. “Post-human Narrativity and Expressive Sites: Mobile ARt as
Software Assemblage.” Pp. 357-369. In Augmented Reality Art, edited by
Vladimir Geroimenko. Switzerland: Springer International Publishing.
Wright, Rewa. 2018b. "Interface Is the Place: Augmented Reality and the
Phenomena of Smartphone–Spacetime". Pp. 117-125. In Mobile Story Making
in an Age of Smartphones, edited by Max Schleser and Marsha Berry.
Switzerland: Palgrave Pivot.
Wright, Rewa. 2018c. Tactile Signal: Agave Relay. Mixed media software
assemblage. Performance recorded at the Black Box, UNSW Art & Design,
Sydney, May 2018.
https://www.youtube.com/edit?o=U&ar=1&video_id=piIenCBZzGU.
See Appendix 1 for video documentation available on accompanying USB key.
Wright, Rewa. 2018d. Contact Zone. Mixed media software assemblage. Exhibition
dates are 19-23 November 2018, at the Black Box, University of New South
Wales, Faculty of Art and Design, Greens Road, Paddington, Sydney. Video
documentation of the rehearsal for this exhibition:
https://youtu.be/7OvRrFnxUes
Zhang, M., Zhang, Z., Chang, Y., Aziz, E.S., Esche, S. and Chassapis, C. 2018.
"Recent Developments in Game-Based Virtual Reality Educational
Laboratories Using the Microsoft Kinect." Pp. 138-159. In International
Journal of Emerging Technologies in Learning (iJET), 13(1).
Zweifel, R. and Zeugin, F. 2008. "Ultrasonic acoustic emissions in drought-stressed
trees – more than signals from cavitation?" Pp. 1070–1079. In New
Phytologist, 179. Retrieved 4 June 2017.
https://www.ncbi.nlm.nih.gov/pubmed/18540974
Editorial note: This dissertation has been referenced using the American Sociological
Association Style (ASA).
Appendix 1
Folder 1. Wright, Rewa. 2016. Tactile Light. Mixed media software assemblage.
Performance recorded at the artist's studio.
https://www.youtube.com/watch?v=KRd2kBTRkYA.
Folder 2. Wright, Rewa. 2017c. Tactile Sound. Mixed media software assemblage.
Performance recorded at the artist's studio.
https://www.youtube.com/watch?v=alxwMb4KQSQ
Folder 3. Wright, Rewa. 2017d. the Wild Versions (1-4). Mixed media software
assemblage. Performance recorded at A.H. Reed Park, Whangarei, Aotearoa-
New Zealand.
https://www.youtube.com/edit?o=U&ar=1&video_id=nGT0yulyXpk.
Folder 4. Wright, Rewa. 2018c. Tactile Signal: Agave Relay. Mixed media software
assemblage. Performance recorded at the Black Box, UNSW Art & Design,
Sydney, May 2018.
https://www.youtube.com/edit?o=U&ar=1&video_id=piIenCBZzGU.
Folder 5. Wright, Rewa. 2018d. Contact Zone. Mixed media software assemblage.
Exhibition dates are 19-23 November 2018, at the Black Box, University of New
South Wales, Faculty of Art and Design, Greens Road, Paddington, Sydney.
Video documentation of the rehearsal for this exhibition:
https://youtu.be/7OvRrFnxUes