Mixed Reality Re-assembled: Software Assemblages at the Edge of Control

Author: Wright, Rewa
Publication Date: 2018
DOI: https://doi.org/10.26190/unsworks/21084
License: https://creativecommons.org/licenses/by-nc-nd/3.0/au/

Mixed Reality Re-assembled: software assemblages at the edge of control

Rewa Wright

A thesis in fulfilment of the requirements for the degree of

Doctor of Philosophy

Faculty of Art & Design

October 2018
Thesis/Dissertation Sheet

Surname/Family Name : WRIGHT


Given Name/s : REWA FLEUR
Abbreviation for degree as given in the University calendar : PhD
Faculty : ART AND DESIGN
School : UNSW ART AND DESIGN
Thesis Title : Mixed Reality re-assembled: software assemblages at the edge of control.

Abstract 350 words maximum: (PLEASE TYPE)


In certain paradigms migrated from commercial and engineering practice to media art, Mixed Reality (MR) is often encountered as augments viewed through a screen display. Understood as both informatic and digital, augments are supplementary content that enhances a human experience of 'reality'. My project cultivates a contrasting view of augments as emergent via human and nonhuman processes that entangle digital as well as physical spaces. Through a practice-based approach located in media art, this research contributes an artistic formulation – the software assemblage – supported by a suite of techniques and methods that attempt to re-assemble MR as an expanded practice that occurs both on and off screen.

The software assemblages produced in this research draw upon Gilles Deleuze and Felix Guattari’s machinic
assemblage, a relational ecology of material elements organized by movement, as well as Karen Barad’s concept of
agential realism, where nonhuman matter enacts situated modes of agency. Thinking with Donna Haraway, the software
assemblage takes a diffractive approach, exploring patterns of interference in MR spaces. An analysis of selected media
art practices operates in tandem with this trajectory, investigating influential work by Golan Levin and collaborators,
OpenEndedGroup, Yvonne Rainer, Miya Masaoka, Adam Nash and Stefan Greuter, as well as Christa Sommerer and
Laurent Mignonneau.

Developing a re-figured version of MR, augments become performative as they co-emerge with my body, in media environments that assemble living plants, hardware devices, and computational networks. Augments will be apprehended not only as screen objects, but also as a mode of materiality. Emerging from this research are techniques and methods that investigate: the performative potential of augments outside of the informatic; the Leap Motion gestural controller as a performative interface; the generation of augmented audio from the bio-electrical signals of plants; and, the extended senses of embodiment that embroil the performer. Here, signals, augments, and bodies are manifest as relational forces that diffract and modulate through the software assemblage. An alternative MR emerges that ripples through physical as well as digital space; it is here that augments exceed the informatic.

Declaration relating to disposition of project thesis/dissertation

I hereby grant to the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or in part
in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all property rights,
such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.

I also authorise University Microfilms to use the 350 word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral
theses only).

ORIGINALITY STATEMENT

‘I hereby declare that this submission is my own work and to the best of my
knowledge it contains no materials previously published or written by another
person, or substantial proportions of material which have been accepted for the
award of any other degree or diploma at UNSW or any other educational
institution, except where due acknowledgement is made in the thesis. Any
contribution made to the research by others, with whom I have worked at
UNSW or elsewhere, is explicitly acknowledged in the thesis. I also declare that
the intellectual content of this thesis is the product of my own work, except to
the extent that assistance from others in the project's design and conception or
in style, presentation and linguistic expression is acknowledged.’

Signed ……………………………………………..............

Date ……………………………………………..............

List of publications resulting from this research

Small sections from the following publications may be included in this Dissertation:

Wright, Rewa. 2014. “From the bleeding edge of the network: Augmented reality and
the software assemblage.” Pp. 185-193. In Post Screen: Device, Medium and
Concept, edited by Helena Ferreira and Ana Vicente. Lisbon, Portugal:
CIEBA-FBAUL.

Wright, Rewa. 2015. “Mobile augmented reality art and the politics of re-assembly.”
In Proceedings of the 21st International Symposium on Electronic Art.
Vancouver, B.C.: ISEA International.

Wright, Rewa. 2016a. “Augmented reality as experimental art practice: from information overlay to software assemblage.” In Proceedings of the 22nd International Symposium on Electronic Art. Hong Kong, China: ISEA International.

Wright, Rewa. 2016b. “Augmented Virtuality: Remixing the Human-Art-Machine.” Pp. 158-166. In Post Screen: Intermittence + Interference, edited by Helena Ferreira and Ana Vicente. Lisbon, Portugal: Edicoes Universitarias Lusofonas.

Wright, Rewa. 2018a. “Post-human Narrativity and Expressive Sites: Mobile ARt as
Software Assemblage.” Pp. 357-369. In Augmented Reality Art, edited by
Vladimir Geroimenko. Switzerland: Springer International Publishing.

Wright, Rewa. 2018b. "Interface Is the Place: Augmented Reality and the
Phenomena of Smartphone–Spacetime". Pp. 117-125. In Mobile Story Making
in an Age of Smartphones, edited by Max Schleser and Marsha Berry.
Switzerland: Palgrave Pivot.

Table of Contents

Acknowledgments .................................................................................................................. iv
List of figures ......................................................................................................................... v
Introduction ........................................................................................................................... 1
What is software? 5
What is a software assemblage? 6
Modes of assembly: a chapter outline 11
Chapter 1. From informatic overlay to software assemblage ............................................... 17
The informatic overlay in AR/MR 18
Approach one: technical and engineering paradigms 19
Approach two: remediation and metaphor between real and virtual 25
Approach three: AR/MR through media art practice and scholarship 31
Materialist approaches to the interface 40
Materialist thinking through software studies and media art 42
Chapter 2. Augments, apparatus, intra-action .................................................................... 49
Interrogating 'real' and 'virtual' 51
Relational emergence in the Augmented Hand Series 55
From static to dynamic: Leap Motion as a performative interface 56
Avoiding the logic of control 62
The corporeal body as digital 63
Shifting relations in the software assemblage 66


Acknowledgements

A node is incomplete without its distributed network. Thanks to my family, for the nurturing home environment in which I thrive. Simon, my
more than muse, Mereana and Charlie, my smallest best friends, Jane,
whose determination I share, Ted, a Ngāpuhi warrior, and Sue and Carrie,
forever kind. My sincerest thanks to the diverse thinkers and makers who
have relationally informed the conceptions contained in this Doctoral
research. Especially, I would like to extend my sincerest gratitude to my
guiding light in the academic world, Professor Anna Munster, who has
extended my perceptions of what the software assemblage might do,
beyond what I imagined. Under her skilled mentorship I have become a
disciplined researcher. As well, my second supervisors: the brilliant artist
Petra Gemeinboeck, whose acute sense of a potentialized virtual also
embraces the visceral; and the talented sonic creator Ollie Bown, for
supporting my project and offering careful feedback. I would also like to
tip my hat to Ross Harley and Peter Shand. Winding the clock back a little
more, I wish to thank my first academic mentor, Sandra Furness, my
debating coach and art history teacher, who lit the fuse that is still
burning. I am grateful for the intellectual and artistic nourishment given
by the conference/exhibitions/publications I have been a party to, notably
the collegial environments of ISEA International, the Post Screen Festival,
the Mobile Innovation Network Australasia, as well as the Immediations
and Algorithmic Cultures groups led by Professor Munster at UNSW Art
& Design. This thesis would not have emerged without the invaluable
assistance of the Australian Postgraduate Award, and the institutional
backbone provided by the Faculty of Art & Design at the University of New
South Wales, whose strong research culture has generated another seed.

List of Figures

Editor’s note: In the public version of this thesis, due to copyright issues,
Figures 1-16 are unavailable. Figure numbers have been retained for
consistency in the remainder of the thesis.

Fig. 1. Milgram and Kishino’s Reality-Virtuality Continuum. Image restricted.
Fig. 2. Microsoft’s Hololens, marketing image, January 2015. Image from the official launch of the Hololens, published in Time magazine, August 3, 2016. Retrieved from http://time.com/4436606/microsoft-hololens-release-date-price/ (accessed 2 November 2016). Image restricted.
Fig. 3. Screenshot from the HP Reveal app, taken by the artist. Image restricted.
Fig. 4. Giant hands projected across bodies in the sand. Image restricted.
Fig. 5. People in the sand box, the source of the hands.
Fig. 6. Participants interact with the Augmented Hand Series (2014), Cinekid Festival, Amsterdam. Image restricted.
Fig. 7. The Leap Motion gestural controller, marketing image. Retrieved from https://leapmotion.com. Image restricted.
Fig. 8. Screen shots depicting the Leap Motion ‘Blocks’ example. Image restricted.
Fig. 9. The Leap Motion in recommended desktop use.
Figs. 10 and 11. Leap Motion gestural controller as hand held, two examples. Image: the artist.
Fig. 12. A commercial tagline of the Leap Motion. Retrieved from https://developer.leapmotion.com/vr-setup/ (accessed 7 July 2017). Image restricted.
Figs. 13-15. Stills from Hand Movie (1966) showing Rainer’s micro-gestures. These images are screen captures from an online video copy of the film, retrieved from https://coub.com/view/80y37 (accessed 20 December 2016). Image restricted.
Fig. 16. Tactile Light. Micro-gestures diffract across plants. Image: Simon Howden.
Fig. 17. Hand avatars blend with environment. Image: the artist.
Fig. 18. Environment view. Hand avatars blend with wheatgrass structures. Image: the artist.
Fig. 19. Hand avatar projected to wheatgrass. Image: the artist.
Fig. 20. Hand avatars tend to abstraction. Image: the artist.
Fig. 21. Grass lattice alongside my performative gestures. Image: the artist.
Fig. 22. Sitting on the wheatgrass sheet, activating piezo sensor and Leap Motion in tandem. Image: Simon Howden.
Figs. 23 and 24. Swishing the grass. Movement left to right. Image: Simon Howden.
Fig. 25. MR screen capture from Unity of hand avatars. Image: the artist.
Fig. 26. On the LCD display, the chopped hand in stark relief. Image: the artist.
Fig. 27. Red dot shows the Wild Versions location. Image: Google Maps.
Fig. 28. A location shot at A.H. Reed Memorial Park, Whangarei. Image: the artist.
Fig. 29. Still image from Wild Version 1. Image: the artist.
Fig. 30. Screen image from Wild Version 2. Image: the artist.
Figs. 31 and 32. Sequential screen images from Wild Version 2. Image: the artist.
Fig. 33. Screen capture from Wild Version 4. Image: the artist.
Fig. 34. Screen capture from Wild Version 4, showing light diffractions. Image: the artist.
Fig. 35. Sketch of my homemade mobile system: webcam, laptop, iPhone 8 Plus (the 'frame') running Unity Remote app. Image: artist’s workbook.
Fig. 36. Screen capture from Leap Motion/Vive display. Image: the artist.
Fig. 37. Diagram of the Electromagnetic Spectrum. Credit: NASA’s Imagine the Universe.
Figs. 38 and 39. Signal path diagrams for Yucca Relay and Agave Relay. Images: artist’s workbook.
Figs. 40 and 41. Screen images via HMD in Yucca Relay performance. Image: artist’s workbook.
Fig. 42. Yucca Relay's performative interfacing. Image: Simon Howden.
Fig. 43. Yucca Tree with white electrodes from MIDI Sprout at bottom right. Image: Simon Howden.
Fig. 44. Tactile Signal: Yucca Relay performance, meshless avatars. Image: the artist.
Fig. 45. Tactile Signal: Yucca Relay performance, meshless avatars. Image: the artist.
Fig. 46. Agave Relay, screen capture from video. L-R: HMD view/environment view.
Fig. 47. Technique 1. Image: Simon Howden.
Fig. 48. Technique 1. MIDI. Image: the artist.
Fig. 49. Technique 2. Image: Simon Howden.
Fig. 50. Technique 2. MIDI. Image: the artist.
Fig. 51. Technique 3. Image: Simon Howden.
Fig. 52. Technique 3. MIDI. Image: the artist.
Fig. 53. Technique 4. Image: Simon Howden.
Fig. 54. Technique 4. MIDI. Image: the artist.
Fig. 55. Screen capture from HMD with my hand under the avatar. Image: the artist.
Fig. 56. Agave Relay, screen capture from HMD, with contorted avatar. Image: the artist.
Fig. 57. Green Wall Panel on my front porch, March 2018. Image: the artist.
Fig. 58. 3D model of visitor experience showing reactive plants and webcam view on screen. To be installed at Black Box, 19-23 November 2018.
Fig. 59. Contact Zone, ‘agave modulation’ segment. Image: the artist.
Fig. 60. Contact Zone, ‘agave modulation’ segment, HMD view, infrared signal captures my hand (far left) as well as augments. Image: the artist.
Fig. 61. Contact Zone. HMD view as performer moves from agave to green wall. Image: the artist.
Fig. 62. Contact Zone environment. L-R: LCD screen, performer, green wall. Image: the artist.
Fig. 63. Contact Zone. LCD screen capture from hand held Leap Motion. Image: the artist.
Fig. 64. Contact Zone. HMD screen capture from Leap Motion/Vive apparatus. Temporally synchronous with Figs. 62 and 63. Image: the artist.

INTRODUCTION

On a bright day in 2013, walking up an urban mountain to geo-locate an augmented reality (AR) project, I ruminated on how my networked arrangement might be thought
(Wright 2013). Was it an installation, was it an assemblage, was it even ‘fine art’?
Wrestling with a patchy wireless internet connection, the endless glare of the sun on
my smartphone screen, and the sobering realisation my code required further work, I
felt the affective force of the collision between software, on the one hand, and
environment, on the other. The contingent movement between these two nonhuman
forces would decide whether my artwork operated as envisaged. The questions that arose that day concerned not only what combinations of the physical environment
and the digital screen my networked art was calling forth, but also what an ‘augment’,
in the context of this type of art, even was. They began a line of inquiry that initiated
this doctoral research a year later.

In a widely accepted definition of augments, Ronald Azuma argues that they are data
objects interactive in real time, and that they register across three dimensions
(1997:355). Augments can either register as a data overlay on a ‘virtual’ world or on
the ‘real’, and this changes the particular category of mixed reality (MR) they belong
to: in augmented reality (AR) augments rest upon a ‘real’ world; in the converse case,
augmented virtuality (AV), ‘real’ content is inserted in a virtual environment (Milgram
and Kishino 1994:1321). However, through commercial and industrial practice, the
notion of the ‘augment’ has also become fused to the digital and informatic. In a definition that has migrated to media art practice, augments are often
considered as informatic overlays, content ‘overlaid upon a visual representation of
the physical’ (Lichty 2014:99). An underlying assumption is that augments are always
digital. In narrow technical terms, this could be considered correct, since a
screen/surface is the mechanism by which the digital is displayed to a user/participant
in most types of MR experience. Yet, imbricated with a natural environment, such as
the location of my artwork, recourse to the digital as source offers only a partial
explanation. There, the environmental space came into play as a relational force,
colouring not only my experience of the digital, but also its capacity to materially act
through a technical network. Instead of recourse to the existing approaches and descriptions1, I began to investigate the notion that augments – inflected by a full spectrum of materialities present at the mountain site – should be considered in
relation to the surrounding physical space. Emergent through an artwork/event that
implicated physical ‘reality’ as well, augments can no longer be apprehended as purely
digital, since the gestures that lure them to emergence are imperceptibly linked to
bodies of users/performers, the computational network, and the natural environment.
What the situatedness of that environment suggested was that I needed an approach to augments that explored different spaces, techniques and terms for mixing physical and digital realities. Such an approach would acknowledge that simply knowing what data elements technically make up a digital augment does not reveal what an augment might do. If augments emerge more through arrangements that move relationally with one another, then a range of other aspects must also be taken into account, such as: what other materialities augments might move relationally with; what gestures assist in materialising them as MR; and, how augments might shift and adapt through environmental as well as networked conditions.

Through the research that has constituted this doctoral program, I have come to
understand digital augments as performative: they are not discretely formed prior to
interaction with a computational network. Instead, they emerge with the networked
assemblage, as a matter of contact between all the elements in that system, whatever
it might be. Elaborating on relational thinking in this dissertation, augments will be
re-conceived as a mode of materiality rather than as purely digital objects. It is my
speculation that such an approach will allow a more affective set of relations to develop
for MR. But before I trace what such a material approach to augments might do, I will
broadly sketch the diverse and innovative artistic field of MR.

When I began augmenting real world environments at the end of 2012, in work that
preceded this doctoral research, MR was already a burgeoning area for arts practice
and research. Diverse pieces ranged from pioneering forays such as Myron Krueger’s Videoplace (1975-1989)2, to playable media that blurs physical and digital space such as Can you see me now? (2001)3 by Blast Theory, to live performances where augments and hand gestures unfold in real-time like Tmema’s Manual Input Sessions (2004)4 by Golan Levin and Zach Lieberman, to propositions in online environments like Becoming Dragon (2008)5 by Micha Cárdenas. As well, mobile propositions that
used smartphones to activate augmented ecologies were about to enter the field, a
selection of which will be discussed in chapter 1. Artistic works that blend physical and
digital space underscore MR as a vibrant meshwork of experimental practice. Artists
have built upon and added to the many developments in computer vision, tracking,
and alignment, emanating from engineering and computer science, and they have
deployed these in new ways that diversely mix ‘realities’.

Addressing the spirit of inventiveness flowing through those experiments, my contribution attempts to situate augments as performative materialities that can be
gathered up within an emergent ‘arrangement’ I have called the software assemblage.
The practice that helps to facilitate the software assemblage, I have termed
performative interfacing. This is generated by entanglements between phenomena
that emerge by way of gestural apparatuses and carefully configured networks of
materials – such as the signals generated by code and plants – that ripple across
analogue and digital, as well as physical and screen-based sites. Performative
interfacing is developed by applying choreographic techniques to corporeal gestures,
that initially attend to the hand’s micro-gestures (articulated by fingers, thumb and
palm, motivated by arm and wrist), yet in my later performance pieces Tactile Signal
(2018) and Contact Zone (2018) engage the relational movement of my whole body
as it traverses the exhibition space.
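
To give a concrete sense of how such micro-gestures register as data, the following minimal sketch polls the device and reduces each tracked hand to a five-digit pattern of extended fingers. It assumes the Leap Motion C# SDK of the period (its Controller, Frame, Hand and Finger types); the 'signature' encoding is an illustrative choice of this sketch, not a method prescribed by the SDK or a transcription of my project code.

    using System;
    using System.Text;
    using Leap;

    // Minimal sketch: poll the Leap Motion service and reduce each tracked
    // hand to a crude micro-gesture 'signature' of extended fingers.
    class MicroGestureReader
    {
        static void Main()
        {
            var controller = new Controller();    // connects to the Leap service
            while (true)
            {
                Frame frame = controller.Frame(); // most recent tracking frame
                foreach (Hand hand in frame.Hands)
                {
                    var signature = new StringBuilder();
                    foreach (Finger finger in hand.Fingers)
                        signature.Append(finger.IsExtended ? '1' : '0');
                    // e.g. "01100": index and middle extended, thumb curled
                    Console.WriteLine((hand.IsLeft ? "L: " : "R: ") + signature);
                }
                System.Threading.Thread.Sleep(50); // ~20 samples per second
            }
        }
    }

Even at this crude resolution, the stream of signatures registers the finger-level articulations that the choreographic techniques described above attend to.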

Furthermore, incorporating plant matter as a co-compositional element is an under-researched area, and one that raises issues pertaining to the ‘natureculture’ approach,
pioneered by Donna Haraway.6 The research axis of bodies, plants, and data is often
neglected in the mainstream contexts in which MR technologies are embraced. In my
software assemblages, gestures generate patterns of (signal) interference across
material arrangements with data and plants. Incorporating living plants as a
materiality in my artwork draws in the initial concept of the software assemblage as
an entity that operates relationally with its environment. I will be using hand and body
gestures to generate augments in screen space, and as well these gestures will be in
tactile contact with living plants. To perform the gestural tracking, I utilise the Leap
Motion – an ‘off-the-shelf’ interface – which I re-imagine using specially developed
techniques that skew its normative use from controller of data to more performative
enactments.

Through the techniques of performative interfacing developed in this research – outlined in detail from chapter 2 – augments will emerge in arrangements that
articulate their affective potential to co-compose in relational systems. However, this
research does not utilise performing agents or artificial intelligence to generate
nonhuman movements. Rather, the performative approach taken here is through
connections with the moving/gestural body in the MR situations I have designed;
specifically, using the Leap Motion interface to connect and reconfigure my hand
gestures as digital. Gestures are important vectors for generating performativity in
augments as materials, and in this research they will be linking different modes of
matter with one another, such as the vibrant living matter of plants with the affective
movements of digital code. However, as shall be explicated in chapters 3 and 4, the
materialities developed as augments are not only generated by code, but are also bio-
electrical, derived from the voltage emanating from living plants. Using a capacitive
touch sensor attached to a plant by electrodes, I will be modulating its bio-electrical
signal, a process that will be investigated as a form of augmented audio.

I activate the Leap Motion as a performative interface in two configurations – hand-held and head-mounted – emphasising the shifting alignments between my body, living
plants, and custom-made software created in the Unity SDK.7 Utilising the Leap
Motion interface as both handheld and head-mounted, I investigate how these
different configurations impact on the emergence, entanglement, and circulation of
augmented material. Strategies for performative interfacing in MR will assist with
moving beyond the notion that we are ‘interacting with interfaces’ to access content,
as more conventional accounts of digital augments as informatic overlays would have
us believe. My research ‘problem’ departs from the issue of the informatic overlay as
the mainstay of AR/MR practice, the limits of which will be detailed in chapter 1.
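
As a sketch of what the two configurations involve at the level of code, the fragment below switches the device into its head-mounted tracking mode and lets a pinch micro-gesture modulate a scene object standing in for an augment. It assumes the Leap Motion C# SDK running inside Unity; the AugmentBehaviour class and the pinch-to-scale mapping are hypothetical illustrations, not part of either SDK nor a transcription of my project code.

    using UnityEngine;
    using Leap;

    // Minimal sketch: one script toggling between desktop/hand-held and
    // head-mounted tracking, with a pinch gesture modulating an 'augment'.
    public class AugmentBehaviour : MonoBehaviour
    {
        public Transform augment;        // placeholder for a digital augment
        public bool headMounted = true;  // false approximates hand-held use
        private Controller controller;

        void Start()
        {
            controller = new Controller();
            if (headMounted)
                // Re-tunes tracking for a sensor facing outward from an HMD.
                controller.SetPolicy(Controller.PolicyFlag.POLICY_OPTIMIZE_HMD);
        }

        void Update()
        {
            Frame frame = controller.Frame();
            foreach (Hand hand in frame.Hands)
            {
                // Palm position arrives in millimetres; scale to scene units.
                Vector palm = hand.PalmPosition;
                augment.localPosition =
                    new Vector3(palm.x, palm.y, palm.z) * 0.001f;

                // A pinch micro-gesture (0..1) swells the augment's scale.
                augment.localScale = Vector3.one * (1f + hand.PinchStrength);
            }
        }
    }

The point of the sketch is not the particular mapping but that the same gestural stream can be re-routed: what the SDK frames as control data can equally be treated as a material that co-composes the scene.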

In any process that strives for interfacing through code, software takes on an
important yet diffuse role that encompasses the execution, circulation, and adaptation
of code as a material flow across elements. Hence the need to discuss software less as
a command hub and more as an assemblage. As we shall soon see, software
assemblages set in motion oscillatory processes of interfacing that pass through and
between thresholds of the performing/interfacing body, the digital, and the organic.
But before I elaborate on the question of what composes a software assemblage, the
question of what software is in the context of this research must be addressed.

What is software?

In choosing the term ‘software’ as prefix for my assemblage formulation, I am not suggesting that software has taken ‘command’, or that it might be the contemporary
equivalent of electricity or the combustion engine, although a valid case for this has
been made (Manovich 2013:21). This research proposes, after Simon Penny (2017:6)
and in line with Wendy Hui Kyong Chun (2011) and others, that we reconsider
software as relational, and sideline the ways in which software is inscribed by
neoliberal interests and concerns:

Our interactions with software have disciplined us, created certain expectations
about cause and effect ... [and] fostered our belief in the world as neoliberal: as
an economic game that follows certain rules. The notion of software has crept
into our critical vocabulary in mostly uninterrogated ways (Chun 2011:92).

To counter the ‘economic game’ described by Chun, participants in software cultures need to interrogate the mechanisms that shape software in certain socio-economic
ways and find different responses. The analysis advanced in my research suggests that
such a response might manifest from treating software as ‘craft’ (see Dyson
1998:1014)8, which would produce software that is bespoke and artisanal.
Approaching software as craft also bears ‘modest witness’ to the ‘heroic’ narratives of
technoscience (Haraway 1997:2)9 that might attempt to suppress more dynamic
versions of software than are offered by cause-effect and rule bound systems.

To elucidate a more affective, performative side to digital augments, I will be utilising the same core combination of software and hardware, whose key components – the
Unity SDK and the Leap Motion gestural interface/Leap Motion SDK – remain
consistent throughout the research. Unity, a gaming engine, falls in the category
distinguished by Lev Manovich as ‘cultural software’; that is, software used to make
other software, for example, for aesthetic, entertainment, or social purposes
(Manovich 2013:21). The tracking system I use, the Leap Motion gestural interface,
more or less accurately follows the human hand as a dynamic physical form (Weichert,
Bachmann, Rudak and Fisseler 2013; Guna et al. 2014). In the software assemblage,
I pair the Leap Motion gestural interface10 with the Vuforia AR extension11 deployed
from inside the Unity SDK. But as we shall see throughout the research that develops and is discussed in this dissertation, exactly where the ‘software’ is located becomes
more complicated as it encounters materialities beyond computational hardware.

What is a software assemblage?

The software assemblage is created in resonance with Gilles Deleuze and Felix Guattari’s conception of the machinic assemblage (1987), which is, variously: a ‘surface of stratification’ lying between two layers of strata (40); and, a ‘machinic assemblage of bodies, of actions and passions, an intermingling of bodies reacting to one another’ (88). Deleuze and
Guattari located the agential drive of their machinic assemblage in its capacity to
attract, compose, and re-assemble heterogeneous material flows such as those
comprising people, objects, or energies. Assemblages mesh existing materials together
in unexpected ways, allowing unique connections to emerge in process. The machinic
assemblage emphasises dynamic configurations that iterate differently to produce
temporary arrangements of matter, generating a ‘unity of composition’ out of
‘molecular materials, substantial elements, and formal relations or traits’ (49). It does
not generate material formations that follow an already constituted model nor does it
pre-determine what kinds of materialities emerge from changes in the organisation of
matter:

We will call an assemblage every constellation of singularities and traits


deducted from the flow—selected, organized, stratified—in such a way as to
converge (consistency) artificially and naturally; an assemblage, in this sense,
is a veritable invention (Deleuze and Guattari 1987:406).

The machinic assemblage is not simply a novel and inventive structure: it proceeds
from the notion that materiality is emergent in its own shifting trajectories, with
assemblages also having the capacity to re-assemble elements through self-organising
processes. Drawing on the machinic assemblage, the software assemblage is a dynamic
relational system that can facilitate the complex and mutual interrelation of both
physical12 and digital materialities.

To help unpack these ideas about assemblage, which see it as both indeterminate and an operable material system, I have turned to the concept of ‘agential realism’ found
in the work of Karen Barad. Challenging the notion – from humanism and other
dualist modes of thought13 – that human agency is observationally set apart from the nonhuman, she instead describes ‘human participation within nature’ as ‘agential reality’ (1996:176). She describes agential realism as:

... an epistemological, ontological, and ethical framework that ... provides a posthumanist performative account of technoscientific and other naturalcultural practices (Barad 2007:32).

A quantum physicist as well as a philosopher, Barad does not consider that nonhuman
matter is inert, passively awaiting a human hand to provide the agency needed for it
to take shape. Rather, matter is ‘produced and productive, generated and generative’,
activated processually by its own quantum potential (2007:137). Through her agential
realist framework, she disputes the boundary between human and nonhuman forms
of matter, advancing a mode of critical posthumanism that de-centres human agency
by acknowledging the multi-valent agencies of the nonhuman, as a dynamic collection
of entangled forces. To explicate a perspective that advances the software assemblage
approach to MR, human agency must be unwound as the privileged structuring force,
so that consideration might be paid to the transformations between all kinds of matter,
on and off screen. In my research, the agential realities engaged belong to myself as
performer, as well as to the living plants and the shifting movements of code that will
both become intra-active matters/materials that re-assemble the making of mixed
physical and digital space as indeterminate events in a shifting ecology of entities and
relations. The ‘matters’ of code and plants are seen to performatively enact their
situated and conditional forms of agency, manifest as practices of signal that co-
compose the work. Nonhuman matter is explored for its affective potential to
relationally transform other entities with which it makes contact. In the software
assemblages in this research, such a conception of matter will give rise to nonhuman
agencies – signaletic, computational, and environmental – that beckon an alternative
MR that is at the edge of control, rather than pre-determined as an executable
sequence of events.

Barad’s agential realism pays attention to: the actual movements between types of matter in the world, which she terms intra-actions; the structures or apparatuses that channel matter in a particular configuration; and an understanding of diffraction.14 Diffraction patterns are interferences that matter generates as it passes across bodies or objects.15 Intra-action is a nuanced alternative to the somewhat programmatic notion of interaction, which in recent years has been limited by a common acceptance as a mode of interfacing in which an actual space of contact – the interface – becomes the zone for negotiation between entities. Barad states:

The neologism “intra-action” signifies the mutual constitution of entangled agencies. That is, in contrast to the usual “interaction,” which assumes that there are separate individual agencies that precede their interaction, the notion of intra-action recognizes that distinct agencies do not precede, but rather emerge through, their intra-action (2007:33).

This processual view of relations – in which matter engages in transformational processes that are ongoing and dynamic in its becoming – is different from ‘interaction’ (Barad 2007:141). As a process, intra-action emerges through the agential realism of entangled human and nonhuman agencies. For Barad, interaction takes for granted that there are entities already constituted outside their relations of engagement.

Of course, interaction has many more detailed definitions, depending upon the
specific field under discussion. For example, in the field of Human-Computer
Interaction (HCI), interaction is ‘the study of the way in which computer technology
influences human work and activities’ (Dix 2009) 16, while in art history, ‘interactive
media’ is sometimes considered as an ‘outcome of the history of the human/machine
relationship that goes back to the industrial revolutions ... ‘ (Huhtamo 2004:2). In
media design, interaction is figured by an operation (such as touching a screen) that
performs the function of moving an event forward (such as starting a data system). In
all these understandings, a forward or linear movement is suggested, where
interactivity figures at the functional meeting point between human and machine.
Interaction affords a structured entry point to a more broadly based engagement that
might articulate social and cultural forces as well. Intra-action, however, is not figured
by a linear movement, but an entangled and omni-directional one, where different
modalities of matter generate phenomena that re-draw material boundaries.

Barad’s approach via intra-action allows a re-working of human-data relations beyond the discrete bodies pre-supposed by commercial and industrial paradigms. In
my practices of performative interfacing – elaborated from chapter 2 onward – intra-
action operates in tandem with my alternate version of MR, as a name for the process
that allows distinct and differentiated material phenomena to emerge through the
relations set in motion by the various software assemblages in this research.

As well, to develop the idea of agential realism more precisely within my research
practice in media art, I have extended Barad’s argument to include digital matter such
as pixels, data, code, and algorithms. This affords a view of augments as data entities with
a material existence and vitality, which become in relation with code and algorithmic
procedures. These ideas are also supported by the dynamic materialist approaches
from theorists such as Adrian Mackenzie (2006), Anna Munster (2006), Brian
Massumi (2011), and Erin Manning (2013). When digital matter is considered as
dynamically organised, it can also be discussed as affective. Melissa Gregg and Gregory
Seigworth, in their seminal edited text The Affect Theory Reader (2010), provide a nuanced and non-prescriptive sense of how affect permeates the full spectrum of worldly
encounters:

Affect can be understood ... as a gradient of bodily capacity—a supple incrementalism of ever-modulating force-relations—that rises and falls not only along various rhythms and modalities of encounter ... an incrementalism that coincides with belonging to comportments of matter of virtually any and every sort (2010:2).

Gregg and Seigworth identify affect without fixing it in a static definition that might
overly determine what affect might be. They suggest that affect is very much a quality
immanent to most situations and encounters, both human and nonhuman. Affect is of
the body, but also incorporeal: it is active, ephemeral, tonal, textural, expressive and
felt. Critically, it modulates through bodies and situations; contingent and extensive, it moves with ‘matter of virtually any and every sort’.

Drawing on Baruch Spinoza17, Deleuze and Guattari ask what affects a body is capable
of ‘at a given degree of power’, where power is a relational force in a diagram
(1987:256). Implying ‘an enterprise of desubjectification’, affects are a relational force
that loosens ingrained subjectivity, and beckons bodies toward new embodiments
(270). Bodies have affective capacities, and bring these to bear in assemblages, such
as those generated in this research. For my performing body, a recurring theme will
be the affective potentials it lures – and those lured from it – when performatively

interfacing with nonhuman matter and materials. As well, mobilising Karen Barad’s
work concerning the agency of nonhuman phenomena, I will be fielding an elaborated
definition of ‘bodies’, one that reaches outside of the human sphere. In the Tactile Signal and Contact Zone performances – discussed in chapters 3 and 4, respectively
– living plants will be considered for their affective potential as nonhuman ‘bodies’
that co-compose the software assemblage through their agentially emitted bio-
electrical signals. Allied with this thinking, through materialist approaches to code and
software, I will be articulating an approach to data entities that considers the relations
between physical and digital bodies and the differentiated modes of embodiment they
enact when re-assembled by the software assemblage.

Thinking with these relational understandings of bodies, affect and code – that
themselves think with notions of assemblage – will assist in unlocking augments as materialities that emerge through movement, rather than as static and pre-formed informatic overlays. My discussion of the performative relations between
computational forces, the body, and the living plant matter will be further informed
by Donna Haraway’s diffractive approach:

My invented category of semantics, diffractions, takes advantage of the optical metaphors and instruments that are so common in Western philosophy and science. ... What we need is to make a difference in material-semiotic apparatuses, to diffract the rays of technoscience [to] get more promising interference patterns on the recording films of our lives and bodies. (Haraway 1997:16)

The software assemblages developed through my research address Haraway’s challenge of making a difference to ‘material-semiotic apparatuses’, by creatively
mobilising the concept of diffraction. Here, I have considered diffraction as a critical
and experimental practice that inspires the performative luring of technological
apparatuses away from their intended uses. With Haraway, Barad is also concerned
with diffraction, and has elaborated on the notion that apparatuses are actually
diffractive mechanisms that ‘agentially produce "objects" and "subjects" in a changing
relationality’ (2007:93). In my re-worked version of MR, this will imbricate
apparatuses in unusual design trajectories and more complex configurations than are
advanced by the dominant commercial, industrial and entertainment applications, surveyed in chapter 1. This re-worked configuration of MR is explicated from chapter
2 onward in relation to the Leap Motion gestural interface, where diffraction is enlisted
as a strategy to assist in re-purposing that device away from its intended function in
computer science. Processually, I will be re-working hardware apparatuses and their
software assemblies through gestural interventions that prehend new patterns of use.
I will be suggesting that a conventional approach to hardware – such as the Leap
Motion gestural interface – is connected to repetitive patterns of use that restrict a
more creatively inventive conception of digital augments. A diffractive lens will also
afford a re-framing of corporeality, the digital, and plants as they emerge, entangle,
and contest material thresholds in the software assemblages presented in this
research.

Modes of assembly: a chapter outline

Discussion will shift across theory and practice, where allegiances will be made
between philosophers, scientists and artists, whose thinking and making might assist
with the performative and processual re-configuration of MR via the software
assemblage formulation. Chapter 1 will explore current dominant models of MR
design. I will articulate what I term the informatic overlay approach, linking that to
the notion that the display screen is a mechanism that functions either as a window or
a mirror to the technological virtual.18 Through literature and practice in the related
fields of computer science research, engineering, entertainment, gaming, and media
art, I will take up the question of why the current technologies of commercial and
industrial MR should be encouraged to lean toward more performative practices.

Through practice-based research from chapter 2 onwards, I unfold my alternative to the informatic overlay approach: the software assemblage.19 Here, my specific version
of MR weaves together three core materialities: code, living plants, and the
performing body. Two of these materialities involve nonhuman forces – code and plants – yet these are not considered to be subjected to human control. Rather –
supported by certain currents in materialist thinking, as well as quantum physics and
critical posthumanism – code and plants are apprehended as expressing (under
certain restricted conditions) agential forces that might mutually co-compose the
software assemblages in this research. Furthermore, as this research progresses, we
will be discussing augmented materialities more than digital augments. This is
because – by chapter 3 – augments will not only be digital: they will also be composed
of the bio-electrical signals from living plants, offered as a mode of augmented audio.

Inspired by Deleuze and Guattari’s conception of matter as dynamically re-assembled through movement and other forces of attraction – such as scale, flux, and intensity –
augmented materialities are composed of shifting and heterogeneous matter, drawn
from a contiguous material flow. Here, a 'material flow’ is propagated by the relations
between various modes of matter as they emerge through the software assemblages in
this practice. Examples include signals such as the infrared, bio-electrical, and of
course, the digital, as materials that flow through each iterative artwork. Articulated
through code, as well as bio-electrical and infrared signals, augmented materialities
will be generated through processes of performative interfacing. Moreover, they are
conjunctive, layered entities: they are not discretely formed objects that exist prior to
contact with one another; critically, they are not only digital, and resist the informatic.
Through a re-conception of augments outside of the purely digital, we shall see that
augmented materialities are not only performative, but they can be affective as well.
Augmented materialities will be investigated as co-composing the re-assembled
relations of my alternative version of MR, alongside the gestures of my performing
body.

Chapter 2 examines the pivotal conception of performative interfacing, arguing for: a non-informatic approach to digital augments; a re-worked view of apparatus that
accounts for the corporeal as well as the technical; and, the notion of intra-action as
an extensive concept and practice that engages performativity in the software
assemblages discussed. With reference to the seminal artwork Loops (2001) by the
OpenEndedGroup (Kaiser, Eshkar, Cunningham, Downie), as well as the radical Hand Movie by Yvonne Rainer (1966), the theme of choreographic processes that implicate hand gestures will be introduced. Through Golan Levin, Kyle McDonald,
and Chris Sugrue’s Augmented Hand Series (2014), I explore their conception of the
human hand as open to modifications by the digital, adaptations that emerge in real-
time and perceptually challenge the participant. While my techniques of performative
interfacing involve the physical hand and digital hand avatars as screen presences, my
actions emphasise the emergent phenomena that operate off screen as well. This
thread will be taken up in two software assemblages – Tactile Light (2016) and Tactile Sound (2017) – where performative movement techniques invented through non-standard uses of the Leap Motion will amplify my hand’s micro-gestural movements.

Chapter 3 will take the software assemblage performance outdoors, to the natural
environment of a bush reserve in Whangarei, Aotearoa-New Zealand. There, I will be
examining the potential for a non-geolocative approach to MR that pays attention to
the shifting conditions presented by the environment itself, performing with those
energies. Following on from the Wild Versions (2017), the two Tactile Signal (2018)
performances – Agave Relay and Yucca Relay – will introduce several propositions
that elaborate on the agential contribution of living plants and electrical signals to the
software assemblage. These performances will further question the key assumptions
behind conventional approaches to augmented materialities in MR. Firstly, I will
discuss performances that use a recently developed head-mounted permutation of the
Leap Motion, which ‘looks through’ the infrared camera of that device. This visual
perspective reveals the distorted materiality of the signal that actually maps my
physical hand gestures to the digital augments/hand avatars constructed in the Unity
SDK. Secondly, plants will be considered as elements that generate an alternative
approach to augmented audio. They will be harnessed for their bio-electrical signals,
whose voltage is transposed to a Musical Instrument Digital Interface (MIDI)20
sequence that I modulate with digital augments during processes of performative
interfacing.
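
As a rough schematic of this transposition (an illustration of the general voltage-to-MIDI idea, not the MIDI Sprout's actual circuitry), a fluctuating bio-electrical sample can be quantised onto a musical scale and emitted as a raw MIDI note-on message. The pentatonic scale, the assumed 0-5 volt range, and the fixed velocity are hypothetical choices of this sketch:

    using System;

    // Minimal sketch: quantise a plant-derived voltage sample onto a
    // pentatonic scale and return a three-byte MIDI note-on message.
    public static class PlantToMidi
    {
        static readonly int[] Pentatonic = { 0, 2, 4, 7, 9 }; // scale degrees

        public static byte[] NoteOn(double volts, double maxVolts = 5.0)
        {
            // Normalise the sample into 0..1, clamped to the assumed range.
            double t = Math.Max(0.0, Math.Min(1.0, volts / maxVolts));

            int steps = Pentatonic.Length * 4;               // four octaves
            int step = (int)(t * (steps - 1));
            int note = 48                                    // base note C3
                     + 12 * (step / Pentatonic.Length)       // octave shift
                     + Pentatonic[step % Pentatonic.Length]; // scale degree

            return new byte[] { 0x90, (byte)note, 100 };     // ch. 1, vel. 100
        }
    }

In performance, successive samples stream through a mapping of this kind, so that the plant's signal fluctuation is heard as melodic movement, which gestural modulation then re-works further.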

Chapter 4, discussing my final exhibition Contact Zone (2018), will take up diffraction
as a material strategy that iteratively re-works the various gestural techniques, code
modules, and modes of signal from previous pieces, across a living ecological
environment transposed to the gallery space. My aim is to dynamically trace processes
of recursive material change as they make patterns of interference across digital and
physical sites, such as those shifts that happen as gestural data passes between my
body, the computational network and reactive plants, or that trouble the consistency
of the sonic signals passing from plants to MIDI via my hand’s micro-gestures as they
materialise digital augments. During a 15-minute performance, I will be combining
both the handheld and head-mounted permutations of the Leap Motion interface.
Embroiled in this new embodiment of body and apparatuses, my performative
interfacing in this last iteration of software assemblage research thinks with the notion of the relay: I will be both generating and composing with iterative movements
of code and signal that pass through this diffractive network arrangement.

Attempting to beckon MR in media art away from the more conventional approaches
currently being offered to artists by commercial AR/MR, my practice will argue for the
software assemblage as a valuable critical arrangement. An alternative version of MR
that productively adds to the existing field of experimental media art will slowly
emerge from the proposition that, while we might think we know what augments are,
we cannot know exactly what they will do, when explored as affective materials that
shift relations in a software assemblage.

1 Such as those offered by Human-Computer Interaction (HCI), computer science, engineering, as well as commercial and industrial approaches, which I will examine in the following chapter.
2 Krueger developed various iterative artificial reality environments (all called Videoplace) over a number of years in his Artificial Reality Laboratory at the University of Connecticut (1975-1989). In its first iteration as an artwork, Videoplace was funded by the National Endowment for the Arts, and first exhibited at the Milwaukee Art Museum (1975). Retrieved from https://en.wikipedia.org/wiki/Videoplace (accessed 7 December 2013).
3 Can you see me now? is a collaboration between Blast Theory and the Mixed Reality Lab at the University of Nottingham. It was first shown at the b.tv festival in Sheffield on 30 November and 1 December 2001. Retrieved from https://www.blasttheory.co.uk/projects/can-you-see-me-now/ (accessed 4 September 2012).
4 Manual Input Sessions (2004) was first performed at the RomaEuropa Festival, Rome, on 28 November 2004. Retrieved from http://www.flong.com/projects/mis/ (accessed 1 November 2014).

5 Becoming Dragon was a 365-hour MR performance in Second Life. Retrieved from https://secondloop.wordpress.com (accessed 4 July 2013).
6 Malone and Ovenden define ‘natureculture’ as: ‘ ... a synthesis of nature and culture that recognizes their inseparability in ecological relationships that are both biophysically and socially formed’ (2017:1).
7 Unity SDK, retrieved from https://unity3d.com (accessed 12 December 2013).
8 Dyson is speaking broadly about science as craft, but also advances a significant discussion of software. He observes: ‘Wherever serious computing was done, young people learned to write software and to use it. In spite of the rise of Microsoft and other giant producers, software remains in large part a craft industry’ (1998:1014).
9 In Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience, Haraway reads diffractively through technoscience as a ‘modest witness’ who travels metaphorically ‘through cascading accounts of humans, nonhumans, technoscience, nation, feminism, democracy, property, race, history, and kinship. Beginning in the mythic times called the Scientific Revolution, my titular modest witness indulges in narratives about the imaginary configurations called the New World Order, Inc., and the Second Christian Millennium’ (1997:2).
10 Leap Motion SDK retrieved from https://www.leapmotion.com (accessed 9 February 2014). The Leap Motion gestural interface (2010 - present) was envisaged by David Holz in 2008 while studying for a PhD in mathematics. When I began working with it in 2014 it was desktop use only; however, since 2017 it has been technically extended to operate as head-mounted on a range of virtual reality (VR) headsets. I have recently incorporated the head-mounted use into my practice, and this will be the primary technical mode of interfacing with augmented materialities in chapter 4.
11 Vuforia SDK retrieved from https://developer.vuforia.com (accessed 9 February 2014).
12 In this dissertation, ‘physical world’ is used for what is termed in the computer science literature the ‘real world’, since the latter term is highly problematic in philosophical discourse. In this thesis the data-driven counterpart to the physical is the digital (rather than the ‘virtual’, which also has a specific meaning in philosophy). Physical world systems, in the software assemblage formulation, include human and nonhuman forces and materials. These are, in this research, an array of living plant systems, as well as technical devices such as gestural interfaces and head-mounted displays.
13 Agential realism is expressly a critique of the Enlightenment’s legacy, where dualism is a key construction (1996:179-80).
14 Diffractive thinking will be discussed shortly in relation to the work of Donna Haraway, from whom Barad takes up this mode of discourse.


15 Barad defines diffraction in everyday terms: ‘Diffraction phenomena are familiar from everyday experience. A familiar example is the diffraction or interference pattern that water waves make when they rush through an opening in a breakwater or when stones are dropped in a pond and the ripples overlap’ (2007:28).
16 Dix, Alan. 2009. "Human-Computer Interaction". In Encyclopedia of Database Systems, edited by L. Liu and M.T. Özsu. Boston, MA: Springer Publishing.


17 Of the enduring Spinozan context of affect, Gregg and Seigworth (2010:3) note: ‘Baruch Spinoza maintained, “No one has yet determined what the body can do” (1959: 87). Two key aspects are immediately worth emphasizing, or re-emphasizing, here: first, the capacity of a body is never defined by a body alone but is always aided and abetted by, and dovetails with, the field or context of its force-relations; and second, the “not yet” of “knowing the body” is still very much with us more than 330 years after Spinoza composed his Ethics’.
18 In this dissertation, two primary types of virtual are discussed: the 'technological virtual', which is the virtual as described in engineering and computer science paradigms, a purely digital and screen-based phenomenon; and the 'virtual' as described by Deleuze and Guattari, which can include, but is not limited to, the site of the screen.
19 Educator Linda Candy defines a practice-based approach to Doctoral research as: ‘an original investigation in order to gain new knowledge, partly by means of practice and the outcomes of that practice. In doctoral thesis, claims of originality and contribution to knowledge may be demonstrated through creative outcomes in the form of designs, music, digital media, performances and exhibition. Whilst the significance and context of these claims are described in words, a full understanding can only be obtained with direct reference to the outcomes’ (Candy 2006:3).
16

20 Basic description retrieved from https://en.wikipedia.org/wiki/MIDI (accessed 12 September 2018). Additionally, Selfridge-Field (1997) notes that: ‘MIDI is now the most prevalent representation of music, but what it represents is based on hardware control protocol for sound synthesis’ (1997:6).

CHAPTER 1

From informatic overlay to software assemblage

This research considers MR as a processual entity, rather than a discrete form or technical medium. Its interfaces are not simply dynamically engaged across the
physical and the digital but entangled with social and cultural forces. Through
attention to the potential of performative modes of interfacing via the software
assemblage, we shall explore its capacity to generate new types of intra-action, conjoin
apparatuses in interesting arrangements with code and signal, and encourage
experiences that beckon new modes of embodiment. However, this is not the
dominant paradigm for creating MR experiences in industrial or commercial settings.
There, augments are thought of primarily in a technological sense: as virtual visual elements, existing only in digital screen space, the main location where an MR
experience unfolds. This approach privileges the technical capacities and instrumental
role of the various apparatuses delivering MR, over the opportunities for new senses
of embodiment that such experiences might offer.

In the industrial or commercial settings that in recent years have become the main
locations where MR unfolds, it is generally not considered that mixings of reality and
the virtual might also occur in the physical space of the ‘real’ world. However, as this
research will contend, there is an undue emphasis on what emerges in screen space:
what is needed now is attention to MR as emerging in physical space as well. In
industrial and commercial applications, augments are pictorial structures for
delivering informatic content to a screen, an approach I will refer to as ‘informatic
overlay’. The informatic overlay approach has been problematic for the transposition
of MR from an engineering context to more culturally engaged fields, since it has
limited the role of engagements that might happen off screen – such as the spaces of
social or corporeal engagement that somewhat silently support MR as a technical
medium and indeed propagate its use throughout culture.

In this chapter, I will trace the informatic overlay approach from its conception in
engineering paradigms and mainstream computer science research, to its adjuncts in
commercially available products that deliver MR experiences. I will relate that
approach to MR’s major forms of hardware – the smartphone and head mounted
display (HMD) – and their structuring of digital augments as data through the overlay
approach. Furthermore, I will be examining the practices and research paradigms that
deploy digital augments in commercial mediatic assemblages. Since the informatic
focus of augments is rarely interrogated as problematic, the focus of MR research has
been on developing a cadre of technical methods for embedding augmented content,
rather than on questioning the idea of the informatic itself and its manifestation as an
overlay. This chapter will outline the informatic overlay as a structure ensconced
within the Reality-Virtuality Continuum (Milgram and Kishino 1994), as well as
examining its continuing impact in the context of AR/MR. I will be analysing some of
the dominant paradigms that exemplify its use in commercial and industrial media,
before considering a selection of recent approaches from media art that trouble this
notion, extending MR in an experimental direction. The final section of this chapter
will explore some of the ways that artists have re-worked the technologies of MR
through a range of situated practices exploring the actual embodied navigation of
spaces, through methods that are capable of re-figuring augments away from the
informatic overlay paradigm. Imagining MR beyond a technical medium will prepare
the ground for the following chapter, where performative approaches to augments will
further loosen the informatic overlay approach. Before the informatic overlay
approach can be prised away from augments as a materiality, we need to identify its
salient qualities.

The informatic overlay in AR/MR

A survey of scholarly literature and practice reveals three significant threads in
AR/MR. First, found largely in the technical and engineering
literatures, augments are understood as informatic overlays that add digital
enhancements to a physical space. As we shall soon see, this approach depends on and
uses a taxonomy that regulates and classifies – in varying degrees – a ‘real’ world in
continuum with a digital ‘virtual’. The second thread is an approach that operates
through ‘remediation’ (Bolter and Grusin 1999) where AR is seen to re-embody traits
and techniques from previous media such as books and film. The third thread is that
of artistic interventions into these configurations of AR/MR. Such interventions skew
commercially available technology, affording a more critical or reflective examination.
Aligned with these artistic approaches, I survey some of the critical voices from
academia that have begun to analyse AR/MR outside of its engineering limits.

Approach 1: technical and engineering paradigms

Thomas Caudell and David Mizell (1992) coined the term ‘augmented reality’ to
describe the visual and textual layer applied to the heads-up display (HUD) they
adapted to show virtual information over aeroplanes being manufactured at Boeing
(1992:659). There, AR was conceived as a work-enhancing strategy for optimizing
engineering tasks and thus efficiency in manufacturing. Building on that
concept of AR – as a datafied window sitting over the physical world – computer
scientists Paul Milgram and Fumio Kishino (1994) wrote the influential research paper
“A Taxonomy of Mixed Reality Visual Displays”, as a method for classifying varying
combinations of ‘virtual’1 and ‘real’. While several updates and extensions have been
attempted (see Milgram, Takemura, Utsumi and Kishino 1995; Wang and Dunstan
2011), the original article is still the most influential because it contains a reference
scale used as an instrument to measure types of ‘virtuality’. The so-called Reality-
Virtuality Continuum (RV Continuum) was conceived as a spectrum to assist in the
accurate classification of MR on screen displays. There, the category ‘virtual’ is
considered as digitally delivered screen presence, while the ‘real’ is the physical world
in all its dimensions.

Milgram and Kishino’s intention was to remedy some of the difficulties encountered
by computer science researchers by designing a descriptive model that would situate
technical networks that were materially neither purely digital nor physical. Thus,
designers would be able to succinctly decide to what degree their prototype was either
an augmentation of the real world (AR), an augmentation of the virtual (AV), or
virtually immersive (VR).2 Milgram and Kishino state:

An (approximately) three-dimensional taxonomy is proposed, comprising the
following dimensions: Extent of World Knowledge (“how much do we know
about the world being displayed?”), Reproduction Fidelity (“how ‘realistically’
are we able to display it?”), and Extent of Presence Metaphor (“what is the
extent of the illusion that the observer is present within that world?”).

(Milgram and Kishino 1994:1321)

This statement highlights the primary concerns of MR configured as informatic
medium: graphics that are pictorially realistic; metaphors grant presence to the screen
world; and, a coherent knowledge system provides an indexical connection to the
‘real’. The idea that augments (as informatic overlays) should contain semiotically
meaningful content derives from the RV Continuum, which discusses the need for a
‘presence metaphor’ linking physical and digital space. Taxonomy — a branch of
positivist science concerned with producing systems of classification — establishes the
conceptual borders of Milgram and Kishino’s article, anchoring the two modes of MR
(AR and AV) as manifest on screen displays. The taxonomy of the RV Continuum
highlights the technical capacities of a display type and the affordances these provided
in facilitating information-based ‘virtual’ space. For example, their sub-category of
‘reproduction fidelity’ assumes ‘[a] synthesising display is able to reproduce the actual
or intended images of the objects being displayed’, which are both real and virtual
(Milgram and Kishino 1994:1326).

The idea was that a high-resolution display would afford the user a more realistic
simulation of the digital augments – and thus an enhanced ‘presence metaphor’ –
underscoring the value placed on ‘realism’ in the design. My argument, however, is
that this approach offers limited potential for the user/participant to engage with the
digital (virtual) in a way that is not pre-determined by the parameters of the
informatic: privileging the informatic places the participant in the position of passive
appraisal, removing the potential to stimulate new modes of embodiment with the
digital. In subsequent taxonomies, also emanating from engineering and computer
science, the content of various AR and MR spaces is indexed, framed, and categorised
according to their alignment with technologies of immersion, or lack thereof (Wang
and Dunstan 2011; Ohta and Tamura 2014).3 Yet embodiment is rarely discussed. The
literal use of the RV Continuum’s criteria in subsequent designs has led to the
somewhat problematic understanding that MR involves a linear movement from the
‘real’ to the ‘virtual’: where bodies/objects/environments are transposed from physical
space to screen space, via a video stream that combines computational elements (such
as digital augments) with the ‘real’ (a video image of the human body, for example) at
a high ‘reproduction fidelity’. Through such an instrumentalist approach, corporeal
and physical considerations are neglected in favour of a re-composition of the body
and spatial surrounds as digital, bounded by a technologically engineered virtual
space. While perhaps useful in an engineering sense, this scheme has delimited the
potential of MR in applications outside of engineering, such as in media art. An
alternative formulation is needed, one that pays attention to the critical operations
happening in between screen space and physical world space.

A neglected aspect – in the RV Continuum and elsewhere in engineering discourse –
is the capacity of the technical apparatus to relationally shift embodiment for its
human users. This possibility has traditionally been occluded; however, a growing
number of contemporary accounts do incorporate embodied approaches to the virtual,
imported into computer science from sociological or behavioural science (such as
Dourish 2004, 2017), as well as more materially engaged approaches that decentre the
technological object, where ‘practices rather than people, artefacts, or interactions
become the focus of analysis, critique, design, and intervention’ (Pierce, Strengers,
Sengers and Bødker 2013). However, overall, through the literature and practices of
mainstream computer science research in the commercially entwined territories of
AR/MR/VR, it should be noted that matters of ‘interaction’ and ‘interfacing’ are
approached from the stance that the technology comes first. In this approach the
affective potential of bodies to act upon technology is severely curtailed. The
performativity of humans is read through the technical performance of the machine,
whose capacities promise to assist humans to transcend their corporeal forms, yet
often fail to deliver (this point is explored more closely through my software
assemblage Contact Zone in Chapter 4).

In a widely accepted technical definition from Ronald Azuma, AR is any technological
system which combines real and digital elements, is interactive in real time, and
registers in three dimensions (Azuma 1997:355). AR experience is generally framed by
a display screen, often either held by or attached to human users: these range from
smartphone screens, through head-worn and heads-up displays (HUDs), to large
screens designed to reflect human scale and capture human movement. A sample of
recent games and entertainment applications
from the commercial world illustrate how AR continues to be confined to informatic
overlay design. Wikitude (2008)4, for example, was the first application (app) for
smartphone and tablet to use a Simultaneous Localization and Mapping (SLAM)
algorithm to overlay three-dimensional coordinates in geographical space, through
alignment with the accelerometer and gyroscope sensors.5 Cartographic and geo-
locational information was held on a web server and transposed to appear as localised
information on the screen space of the user. Another popular and commercially
successful example from the mobile game industry, the massive multiplayer game
Pokémon GO (2016)6 invites players to collect virtual avatars (Pokémon) which battle
one another, and eventually cooperate to take over virtual bases called ‘gyms’ geo-
located in real space. In the game’s AR mode, ‘trainers’ attempt to capture Pokémon
that are visible as a layer on a smartphone screen.7 The game uses geo-location to
spawn and track the Pokémon (Juhász and Hochmair 2017), in combination with an
informatic overlay approach. In another context, Snapchat (2016)8 overlays novelty
augments on the faces of users, such as a dog, a dancing elf, or a rainbow. The
application uses a ‘feature point detection’ technique to precisely locate augments —
called ‘lenses’ by the company9 — over the faces of users (Pawade, Sakhapara, Mundhe,
Kamath, and Dave 2018). A user’s face is captured through the front-facing camera of
their smartphone, placed in a ‘mirror,’ adapted through the addition of an augmented
overlay, then re-presented in the screen display as an altered image stream.
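
To make the mechanics of this pipeline concrete, the short sketch below approximates a Snapchat-style 'magic mirror' loop using the open-source OpenCV library, rather than the company's proprietary lens engine: each camera frame is scanned for a face, and a pre-made augment image is composited over the detected region before the altered stream is re-presented. It is an illustrative approximation only; the bundled Haar cascade is a coarse stand-in for the more precise feature point detection described above, and 'augment.png' is a hypothetical placeholder file.

import cv2

# Coarse stand-in for proprietary feature point detection: OpenCV's
# bundled Haar cascade returns a bounding rectangle for each face.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
augment = cv2.imread("augment.png")  # hypothetical overlay image

capture = cv2.VideoCapture(0)  # front-facing camera as 'mirror'
while True:
    ok, frame = capture.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image stream
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        # Scale the pre-formed augment to the detected face and overlay it.
        frame[y:y + h, x:x + w] = cv2.resize(augment, (w, h))
    cv2.imshow("magic mirror", frame)  # re-present the altered stream
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()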

In limited ways, these more recent examples of AR engage aspects of a user’s
corporeality. It could be argued that applications like Snapchat provide new senses of
embodiment for the participant by shifting the relation between the camera’s image
stream and a second augmented image stream. Yet bodies here are highly delimited
by their interaction with a screen. Captured in this ‘magic mirror,’ the body is re-
constituted via a technical apparatus as a component of computational vision, where a
digital replica of an area of the user’s corporeality endlessly loops without variation.
Additionally, many of the designs currently deployed in the mobile AR industry
proceed from the assumption that the digital screen functions as an analogue to a
window. Wikitude is literally a map overlaying physical space; in Pokémon GO, the
smartphone screen becomes a ‘portal’ to look through. Others, such as Snapchat,
function through the trope of the magic mirror. In either paradigm, AR is focussed on
what happens within the frame of the screen.10 In summary, a range of widely-used
applications deploy an overall informatic overlay approach, where digital augments
are called from a server, pre-formed and placed as data objects within the frame of a
screen. The select examples cited above illustrate major uses of AR as an informatic
overlay, although many more can be found. It is outside the scope of this research to
cover the entire field; rather it is my intention to locate this approach to interaction in
AR as the lineage of an engineering paradigm, which has migrated into the media and
entertainment industry. The following section will elaborate on the engineering
techniques that make the informatic overlay possible and indicate how – in the design
of commercial products – those techniques also flow into MR as an emergent field.
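
Before turning to those engineering techniques, a schematic sketch may help to ground the geo-locational variant of the overlay just summarised. The fragment below shows, in deliberately simplified form, how an overlay of the Wikitude type might position an augment: a point of interest fetched as pre-formed data is converted into a compass bearing, compared against the device's sensor-reported heading, and projected to a horizontal screen coordinate. Every name and value here is a hypothetical illustration, not any vendor's actual API.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the user to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def augment_screen_x(poi_bearing, device_heading, fov_deg=60, width_px=1080):
    """Map the bearing offset into the horizontal frame of the screen.

    Returns None when the augment falls outside the camera's field of
    view, i.e. outside the 'window' the overlay paradigm assumes."""
    offset = (poi_bearing - device_heading + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None
    return int(width_px * (0.5 + offset / fov_deg))

# Hypothetical point of interest, served as pre-formed data:
poi = {"label": "Sydney Opera House", "lat": -33.8568, "lon": 151.2153}
user_lat, user_lon, heading = -33.8610, 151.2108, 30.0  # from GPS and compass
b = bearing_deg(user_lat, user_lon, poi["lat"], poi["lon"])
print(poi["label"], "at screen x =", augment_screen_x(b, heading))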

The apparatus that started the HMD/HUD research trajectory in MR was invented by
Ivan Sutherland: his Sword of Damocles (1968) took up an entire room, offered
functional vision from one eye, could only be comfortably worn for a few minutes, and
showed wireframe line drawings (Sutherland 1968:757-759). Yet its radical concept –
that human vision might combine with digital content through a display worn on one’s
head – established a research trajectory for AR/MR/VR that has proliferated today
across many devices. Fundamental for this research trajectory is a pragmatic approach
that aimed for greater efficiency in task completion, helpful in military and industrial
contexts that embrace the notion that the human body would be enhanced by
augmented vision. That idea still drives much AR/MR design, yet as I shall explicate
in chapters 3 and 4 – through my Tactile Signal performances – the head mounted
apparatus itself is a highly problematized device that offers only partial perceptual
enhancements. Current commercial and industrial approaches incorporate the latest
sensing technology to convey sophisticated situated and highly contextual data aimed
at adding clarity to the digital experience (Izadi 2012, 2016; Ohta and Tamura 2014).
However, amidst this endless drive for digital verisimilitude, the reasons as to why we
might want an alternative to a seamless digital experience are less frequently
interrogated.

Engineering handbooks are replete with a wide range of tracking techniques for
AR/MR, including marker-based and image-based tracking, model targets stored in
the Cloud and geo-locational information (Carmigniani and Furht 2011; Kent 2012;
Craig 2013; Peddie 2017). These techniques have facilitated a plethora of AR games
for smartphone, with mobile AR being the largest category of commercial MR use (as
in the gamified examples mentioned above). Product launches attach augmented data,
such as a model of a new car, to real-world objects such as cubes, while QR codes on
supermarket cereal
boxes aim to tempt buyers with embedded links to product websites. MR is built upon
the technical advancements of AR, yet in popular usage is something of a marketing
invention, as the tone of the literature confirms.
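
Before considering that tone, the marker-based technique the handbooks describe can be illustrated with a brief sketch. It uses OpenCV's ArUco module (one widely available implementation, not necessarily the one any given handbook has in mind): a printed fiducial marker, the kind of cube or code mentioned above, is located in a captured image, and its corner points supply the anchor at which a pre-formed augment would be registered. Function names follow the OpenCV 4.x interface and vary between versions; 'scene.jpg' is a hypothetical input photograph.

import cv2

# Marker-based tracking: locate a printed fiducial in a captured frame.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("scene.jpg")  # hypothetical photo containing a marker
corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    # The four corner points of each marker give the screen anchor
    # at which an informatic overlay would be drawn.
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    for marker_id, c in zip(ids.flatten(), corners):
        print("marker", int(marker_id), "anchored at", c.reshape(4, 2).tolist())
cv2.imwrite("scene_tracked.jpg", frame)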

Writers of technical books frequently laud AR as a science fiction that has become real,
often framing their enthusiasm with an image from the film Minority Report (Peddie
2017:132), or perhaps a quote from the fictive works of popular writers such as Isaac
Asimov, Bruce Goldberg or William Gibson. Popular technology writer John Rousseau
(2017) went as far as constructing his ‘Laws of Mixed Reality’ based on Asimov’s ‘Laws
of Robotics’, interestingly, an approach that is being taken seriously by other writers
thinking along an AR-MR-technofuturist axis (Peddie 2017:17). Rousseau asserts that
MR will be a seamless mix of ambient environment and high-resolution digital
imagery, facilitated by what he vaguely terms ‘future hardware’:

Future hardware will be capable of rendering high-resolution digital
content that blends seamlessly with our environment, and devices will be small
enough to wear all the time. The complex UX challenges will be resolved, and
new interaction models will emerge along with a new computing paradigm.11

Rousseau’s comments resonate with this AR-MR-technofuturist imagining, where
wild propositions for technical schemes currently out of reach are calmly delivered as
if they are just around the corner. Yet, if we compare the above statement to a selection
of recent HUDs, such as Microsoft’s HoloLens, Magic Leap’s Magic Leap One12, and
Leap Motion’s Project North Star13 we find that in fact, most are built on the software
and hardware engineered through AR with the addition of embedded mobile chipsets.

All the HUDs mentioned above overlay augments as information on a real world, and
the only difference between these products involves the details of their technical
methods; for example, localised variations in screen resolution and refresh rate;
stereoscopic projection and tracking methods; screen material type; or, 3D
holographic overlays instead of polygonal models. However, what is already clear is
that these wearables are culturally bound to many unresolved social concerns, such as
privacy.14 From a perspective and practice that is more concerned with questions of
embodiment – such as mine – strategies that deprivilege vision are necessary in order
to re-evaluate these commercial obsessions with high-resolution displays. Such
strategies must factor in artistic methods that attempt to shift MR out of the informatic
overlay approach. Chapters 3 and 4 explore this potential further, investigating a
permutation on MR that operates with an infrared signal, rather than a high-
resolution image stream. While technical capacity pertains to what is possible in an
engineering sense, it need not govern the ways that practitioners deploy the medium,
since part of using media as art involves inventing new practices that challenge current
paradigms.

Approach 2: remediation and metaphor between real and virtual

Following on from the engineering and computer science research approaches
discussed earlier – the use of presence metaphors, realistic imaging, high-resolution
displays, and prior ‘world knowledge’ as supports for the taxonomy posited by the RV
Continuum – Jay Bolter and Richard Grusin’s ‘remediation’ theory (1999) has also
been important in the development of MR.15 A prominent concept throughout the
beginnings of new media theory, remediation posits historical recombinance as a
primary and defining technique of new media:

Augmented reality remediates not perspective painting, but rather the
windowed style of the desktop interface. In laying icons, texts, and images over
visible objects in the world, augmented reality frankly admits that it is a digital
medium interposing itself between the viewer and an apparently simple and
unitary physical world (Bolter and Grusin 1999:216).

So, at the foundations of AR/MR as explained via remediation, the window
metaphor is already considered essential to the medium. Following his highly
influential and still widely referenced text with Grusin, Jay Bolter drew on
remediation in much of his practice-based design research, speculating that to develop
to the status of medium, AR would most certainly need to reference earlier media
predecessors. For
example, in a multi-authored article he wrote:

A user’s expectations are (implicitly and explicitly) based on their experience
with … all media forms; a lifetime of experiencing film, stage, tv, and so on
creates a starting point for their interpretation and understanding of any new
experience. Understanding, and leveraging, the shared cultural expectations of
the intended audience will allow us to create richer, more engaging, and more
understandable AR experiences.
(McIntyre, Bolter, Moreno and Hannigan 2001:1)

Remediation has been heavily used as an exploratory paradigm to extend AR’s
industrial boundaries into the cultural realm. The effectiveness of this can be seen by
the uptake of AR in fields such as cultural heritage (Bekele et al. 2018), narrative
theatrical experiences (Engberg and Bolter 2014), and entertainment (Barakonyi and
Schmalstieg 2005). Here, the informatic overlay tries to deliver a believable or
seamless story out of virtual material and as such, supports a broadly narrative
approach to AR design. At the same time, remediation’s conceptual maxim — that new
media always refer back to older media antecedents — also functioned to contain AR
within the engineering design approach of information and explanation dominating
the digital overlay. However, Manovich highlights the point that ‘while visually,
computational media may closely mimic other media, these media now function in
different ways’ (2013:61). He discusses Bolter and Grusin’s
application of remediation as concerned primarily with the appearance of new media
forms in ways that display antecedence, yet points out that in terms of function, a new
medium such as the digital photograph ‘offers its users many “affordances” that its
non-digital predecessor did not’ (62). For example, its colour/contrast can be visibly
altered using algorithms, it can be easily layered with other images, or it can be added
to a three-dimensional plane such as in architectural design. In this analysis,
Manovich is essentially calling Bolter and Grusin out for applying a pre-digital concept
of media to digital artefacts. Interestingly – at least from within the early thinking on
AR/MR in media theory as well as interaction design – the tendency is to accept rather
than question remediation.

Following a mode of remediating, AR/MR/VR interfaces – such as handheld devices
and HUDs – are inheritors of the ‘window’ and ‘mirror’ metaphors (Bolter and
Gromala 2003; Friedberg 2006). Here, remediated content is delivered either through
a virtual ‘window,’ or a reflective ‘mirror’ enacting the presence metaphor discussed
by Milgram and Kishino (1994). An interface is primarily understood as a surface that
is either looked through to reveal content (as in a ‘window’ metaphor) or reflects
‘doubled’ real world content (as in a ‘mirror’ metaphor).16 In both formulations, linear
perspective frames an alignment between edges of the screen’s frame and the physical
space beyond to produce an onscreen window to the ‘real’. As I will emphasise in
chapters 3 and 4, this window is in fact entirely mediated by the video stream on which
the augments sit: through this stream, ‘reality’ is transposed by signal.

Another uptake of AR as a remediation can be seen in Sean Morey and John Tinnell’s
recent book, Augmented Reality: Innovative perspectives across art, industry, and
academia (2017), where AR is contextualised as a mode of writing, fused to older
technologies such as the book, tracing out a new space of design for reading itself
(2017:9). The anthology collects interviews with AR practitioners across industry,
academia and art, in fields as diverse as interaction design, teaching, digital business,
electrical engineering and experimental art.17 Supporting their contextualization of AR
via writing as a historical precursor, physical copies of their book incorporate a form
of marker-based AR: readers scan images embedded in the text, to access augments
that supplement the book’s content. Scanning a book-embedded code triggers
augmented material, such as video interview clips, examples of artworks or media
products.18 Here, augments are approached as information whose content is intended
to enrich the reading experience through an intertextual narrative that passes the
reader from the physical space of the book to the digital space of the overlay. In this
scheme, the idea would be that the act of reading is composed between both physical
and digital sites, through which a conjunctive experience is produced. Yet the process
a reader goes through to access these augments is somewhat inhibitive of a dynamic
relay between the physical and digital.

Attempting to download the application Aurasma19 to your smartphone, you are re-
directed to another app called HP Reveal, which has recently taken over Aurasma.
The content of the book has been migrated, however, so you are nevertheless able to
find the same augments. Activating the app on your phone, you are required to scroll
past various media while navigating to the publisher’s (Parlor Press) page: a cartoony
promotional poster for the movie Iron Man, a salacious cover from OK! Magazine, and
some other promotional material you don’t recognise. It is apparent that Morey and
Tinnell’s book is now imbricated in a flow, whose matter is littered with the flotsam
and jetsam of marketing images. Having passed into a commercial mediatic
assemblage, the book is no longer simply a hybrid augmented form, as perhaps was
the intention of its editors. Since the print book takes a perspective that blends
industry, academia, and art, it seems likely that the idea was to produce a novel and
medium-appropriate update to the traditional book, whose design would enact the
editors’ central argument: that AR provides an enhanced design space for writing.
Unfortunately, as we see through the actual experience of accessing the book’s
augments, this design space has already been colonized in advance by commercial
media products, and the book is drawn into the banal dynamic of those material-
discursive practices.

If we consider the augments as part of a relational assemblage with Morey and
Tinnell’s book, we must now acknowledge that these informatic overlays do not simply
convey supplementary content: they are actively and mutually co-shaping our
experience of the printed book. Design spaces of the kind instantiated by HP Reveal
are not inert receptacles for storing informatic content: digital space is an expressive
environment whose relational elements co-compose experience for the user.

Another influential approach that has absorbed remediation strategies can be found
in work emanating from the EPFL+ ECAL Lab in Lausanne, Switzerland, where
engineer-designers Nicholas Henchoz, Vincent Lepetit, Pascal Fua and Julien Pilet
combined new advances they had made in computer vision with the idea that AR
needed to embrace communicative simplicity and ease of use in order to become
creative. Developed through scientific research, demonstrations and specialised
conferences since 1992, their chief contention was that if AR were to shift beyond its
status as simply a ‘technology’ to take on the status of ‘medium’, it would need to
communicate as a ‘dedicated visual language’ containing attributes of grammar,
syntax and the potential for developed narratives (Henchoz and Lepetit 2011:85).
Elements from the real world would be doubled to instantiate a consistent semiotic
flow between physical and digital. For example, Camille Scherrer, a member of their
laboratory, developed the artwork Le Monde des Montagnes (2008), an installation
that used the older medium of paper cut-outs as a trope to segue between a physical
book and its digital augments.

While the theoretical approaches outlined by Henchoz and Lepetit (2011) and explored
in the practical research output of the EPFL+ ECAL Lab are less sequential than the
historical rollout of remediation (as it appears for example in Bolter and Grusin’s early
discussions), they do express some shared concerns. For example, the idea in
remediation that achieving useability by leveraging shared and familiar medial
elements will assist in drawing an audience, resonates with Henchoz and Lepetit’s
principle of creating visual consistency between real and virtual worlds. Furthermore,
the models of interface proposed by Bolter and Gromala (2003), where interface is
either window or mirror, also hold in EPFL+ ECAL research, in regard to the notion
of doubling the real world into the virtual to simulate a consistent semiotic flow.
EPFL+ ECAL produced the influential AR touring showcase, Gimme More (Eyebeam,
New York 2013).20 There, Henchoz stated:

Augmented reality allows everyday objects to tell their stories, reveal
information and interact with users in real time. What transpires as a result is
a radical shift of interdependence between the object and the information it
conveys.21

We can interpret what Henchoz describes as the ‘interdependence between the object
and the information’ conveyed through the idea that augmented content is fused to
actual objects in a logical semiotic connection. For example, TattooAR (2013), an
artwork by Cem Sever from the Gimme More exhibition, placed augmented tattoo
designs on the bodies of participants. Through an altered image stream – that showed
a visitor’s body with a tattoo augment – participants were afforded a view of their
tattooed ‘self’, using a large screen as a mirror metaphor.22 Parallels exist with
Snapchat’s use of the augmented mirror technique (discussed earlier in this chapter),
to insert the body into a fixed framing. Since this artwork is at human scale, rather than
on the small smartphone screens of the Snapchat app, we might also consider what
sense of digital embodiment it might convey for the participant. If we do this, however,
we would also need to inquire as to the quality and degree of the embodied experience
that is generated. Since this re-presentation of corporeality is pre-programmed –
applied to the bodies of whomever encounters the experience – there is limited
scope for the participant to shift the sense of embodiment they perceive in the magic
mirror. Whilst they might express wonder at their body’s new ink, this is the limit to
the experience itself.

In the examples and case studies described above, there is a strong tendency for
augmented content to appear as pre-programmed, called up from a server. It does not
modulate or shift with the environment or with the participant in the experience. In
the section following, we shall work through a third major approach to AR/MR, by
looking at a selection of experimental artworks. Emanating from artistic culture and
theoretical concerns, augments are less restricted here by the need to convey
structured information. Digital overlays are articulated more for what they can do
than for any metaphorical content, such as ‘presence’, they might carry. While at
times utilising presence metaphors to connect digital and physical, AR/MR by artists
radically skews the informatic overlay approaches taken by the engineering and design
examples discussed above. Augments are not there to facilitate literal or mimetic
connections with an object or a body. Rather they are situated as critical and playful
forces that begin to work with and open-up questions of embodied and performative
practice.

Approach 3: AR/MR through media art practice and scholarship

What is lacking in the approaches outlined above is an understanding of how MR can
be utilised in a way that extends beyond its technical encoding as information
supplementing a physical space. While the remediation and semiotic approaches
mentioned above are still dominant in mainstream AR/MR, new understandings have
entered the field. The approaches described in this section are neither engineering-
based nor confined by earlier media concepts. Artists involved with AR/MR have
manifested different techniques for augmenting space, many of which pre-date the
current thinking in computer science and engineering such as that encapsulated by the
RV Continuum and other similarly restrictive models. We will be examining artistic
interventions that complicate both the informatic overlay approach and the notion of
a ‘seamless’ connection between ‘real’ and ‘virtual’ in MR spaces. The experimental
artworks surveyed in this section are sympathetic to what I will later analyse as a
materialist approach to MR that focuses on relations, intra-action and the agential
reality of all elements of the software assemblage. It is not intended to be a
comprehensive selection of artwork in the field: rather, the artworks mentioned here
reveal an interest in processes that encourage self-organisation, emergent relational
forces, iterative re-assembly, and an aesthetically expanded role where audience
members become participants/artists.

Inquisitive writers/practitioners from the avant-garde of media art practice have
drawn attention to the need for an alternative formulation of MR. In an analysis of the
collective Blast Theory’s augmented and mixed reality artwork, Steve Benford and
Gabriella Giannachi note that Milgram and Kishino’s RV Continuum might be more
useful if it was ‘more rhizomatic’, since the classification system tends to place physical
and virtual in opposition to one another rather than fostering a more relational system
(Benford and Giannachi 2011:3). They describe the RV Continuum as a ‘largely
mathematical and technology-centric’ method of ‘constructing virtual spaces’ in order
to align them with physical space (2011:43). In response, they offer the notion of
‘trajectories’, where the participant enacting an artwork moves experientially through
real and virtual online worlds that are partially pre-scripted, and partially self-
generated. The pioneering MR participatory performances Uncle Roy All Around You
(2003)23 and FlyPad (2009)24 by Blast Theory weave together theatrical performance,
online and real-world environments, audience participation, role-playing, and data
extraction in complex arrangements that unfold mutually across the digital as well as
the physical (Benford and Giannachi 2011). Carving out a new genre of MR
performance, Blast Theory’s contribution to a performative MR advances a less
digitally privileged mix of realities, where participants are given cues by the artworks
that send them off on exploratory trajectories that pass through ‘hybrid space’.
Referencing Gilles Deleuze’s notion of the ‘fold’, they describe hybrid space as:

... composed of different, adjacent, “enfolding” spaces, simultaneously
occupying different points on the mixed reality continuum, which remain,
however, in a heterogenous, discontinuous, unsynthesized, and changing
relationship with one another (Benford and Giannachi 2011:45).

For example, in FlyPad (2009), a camera placed over a gallery’s atrium framed a wide
orthographic view of the space below. In this frame – a ‘flying area’ – visitors were able
to take on the identity of a winged avatar (a brightly coloured insect), whose movement
they controlled using a footpad. Flying across and through the atrium, the flyers could
join together, performing new movements by melding their avatars, or remain
separate but fly with less vigour. Prior to playing the game, as they walked around the
gallery, data was extracted from participants’ movement by way of a Radio Frequency
Identification (RFID) tag, and this data was added to their digital avatar in the
FlyPad game. Incorporating a participant-generated trajectory with pre-figured
elements produced an iterative artwork that could never unfold the same way twice
(2011:138-141). While Blast Theory’s work does not directly relate to my practice-
based approach since it relates more to the potential of MR as a transmedia
storytelling medium, it illuminates the need for more performative versions of MR,
influenced by relational assemblages rather than technocentric formulations.
Following the thread of dissent toward technology-driven methods that occlude the
body and align corporeality with a pre-figured data system leads us to artist-generated
approaches that use different techniques to combine corporeality with augmented
digital worlds. In John McCormick and Adam Nash’s Reproduction – an artificially
evolving performative digital ecology (2011), autonomous agents helped to generate
embodied relations that re-sited image/colour data from human interactors to an
adapting digital ‘life-form’. Extracted using motion capture technology, the results of
the results of these hybrid reproductions – as they adapt in real time in response to
feedback – were presented as a full scale projection in an immersive room. The impact
of this mutation on the visitor is discussed as provoking a multi-sensory state of
‘contemplative interaction’ (Riley and Innocent 2014; Riley and Nash 2014), and it is
noted that this understanding differs from conventions that frequently situate MR as
either ‘reactive or distracting’ (Riley and Nash 2014:260). Contemplative interaction
offers itself as a method that examines ‘notions of affect that relate bodies, locations,
spaces and codes across the physical and virtual’ (Riley and Nash 2014:263). Again, a
much more complex figuration than offered by the RV Continuum, and one that allies
itself with Deleuzian notions of affect and embodiment (Riley and Nash 2014:261).

Sandbox: Relational Architecture 17 (2010)25 by Rafael Lozano-Hemmer, used
augmented projections to form spontaneous visual connections between people at
Santa Monica Beach. An area of 740 square metres of the beach had been prepared in
advance with a tracking system, surveillance cameras, and projectors that cast real-
time images of giant hands across its surface. These giant hands were magnifications
of human hands playing in one of two 69 x 93 cm sandboxes nearby. At the same time,
participants on the beach were captured by the same tracking/surveillance/projection
system, with their full body images shrunk to a tiny scale. Re-projected into the
sandbox, people there were able to play with human miniatures using their hands.26
Choices emerged: participants could join the sandbox and see their own hands
projected as giants; or, they could remain in the expansive beach space and have their
own full body images recorded, shrunk, and then projected back to the sandbox for
hands there to explore.

As physical bodies passed by one another – in the illuminated darkness of the tactile
and tractional sand – they passed over and against digital augments as light
projections of the bodies of others. This relay of projections formed a recursive loop
where new relations of overlapping bodies spontaneously and temporarily emerged, in
oscillation between media environment and natural environment, corporeal bodies
and projected bodies. Light gave these augmented bodies presence, through the
relations it formed with other surfaces: the augmented sandbox, the participant’s skin,
the luminous projection system, the tracking and surveillance software.

This is not Lozano-Hemmer’s first installation leveraging AR technology in a complex
relational system. Indeed, it is clear that Lozano-Hemmer was a pioneer in the creative
use of augmented material in media art. In Underscan (2007)27 as well as Body Movies
(2002)28, an earlier version of the same tracking system used in Sandbox was deployed
to track participants and activate augmented video that passed under their shadows
as they walked. Ulrik Ekman has analysed emergence and embodiment in relation to
Underscan:

This happens via the virtualization of their bodies, but also via the emergence
of a complementary interplay, in their embodiments and the real environment,
among locative media, signaletic telepresence, and ubiquitous computing
(2012:18).

Ekman notices a blurring of distinctions, such as between public and private space via
the virtualization of bodies. Like Sandbox, the technical operation of Underscan
involved luring people into a pre-prepared space embedded with sensors and
surveillance equipment, where their movements would be tracked and used to trigger
corresponding movement image sequences. In the case of Underscan, the projected
sequences were from pre-recorded films, deployed on the pavement as augmented
overlays. In the case of Sandbox, the projections on the sand unfold in real time ,
reflecting advances in tracking technology made in the seven years between these
works. Looking over to the sandbox from the sand, participants could see others
playing with their images, and they could do the same with the large-scale hands
projected on them.

Sandbox’s project description lays bare the power relations and affective conjunctions
of bodies (human and non-human) embedded in this artwork:

The project uses ominous infrared surveillance equipment not unlike what
might be found at the US-Mexico border to track illegal immigrants, or at a
shopping mall to track teenagers. These images are amplified by digital
cinema projectors which create an animated topology over the beach,
making tangible the power asymmetry inherent in technologies of
amplification.29

This ‘animated topology’ approached power relations as matters of scale: where giant
hands manipulated tiny people, yet the tiny people were also able to overturn that
relation and become the giant hands. Spaces are alluring and carefully composed,
drawing humans toward their dynamism – such as movements of light and data –
latent with affective potential. Manuel DeLanda (2007) suggests that human
experience in Lozano-Hemmer’s artworks is largely produced through ‘expressive
spaces’ activated by underlying nonhuman energies. In such spaces, code is not simply
executable but is affective, ‘long series of ones and zeros that ultimately embody the
software animating the hardware’ (DeLanda, 2007:104). Establishing a giant structure
populated with surveillance cameras, a bespoke tracking system, and ultra-high power
projectors, Lozano-Hemmer generated an expressive space for humans to affectively
perform within.

All the artist-led approaches described above convey the idea that the physical and
digital spaces of MR are actually not separate but meet through oscillatory movements
of bodies and data. In such assemblages, the materiality of the digital is not only
affective, but further suggests that senses of embodiment operate between human and
nonhuman. This affords the broad perspective that data and the corporeal might
mutually co-constitute one another. Clearly, such approaches are markedly different
from the engineering and computer science paradigms discussed earlier in this
chapter, that would fix MR as a matter of technical arrangements on a display screen.

Lanfranco Aceti and Richard Rinehart’s (2013) edited special edition of Leonardo
Journal, Not Here Not There, was the first comprehensive survey of AR/MR as an
artistic category. Framing a quote from the Manifest.AR collective30, Rinehart
summarizes some of the provocative issues raised by mobile AR:
Sited art and intervention art meet in the art of the trespass. What is our current
relationship to the sites we live in? What representational strategies are
contemporary artists using to engage sites? How are sites politically activated?
(Rinehart 2013:9)

This collection focussed on mobile AR deployed through geo-location, since this was a
popular artistic movement at the time. Soon after Aceti and Rinehart’s collection,
Vladimir Geroimenko’s text Augmented Reality Art: From an Emerging Technology to
a Novel Creative Medium (2014) became the first book to systematically analyse the
artistic threads coming out of AR as a new medium. Contextualising that study,
Geroimenko reproduced in full the Manifest.AR manifesto (Freeman et al. 2012),
leading him to argue that what differentiates AR from other emergent media forms
such as ‘virtual reality, Web art, video, and physical computing’ is that it is bound up
with an activist politics that re-purposes technologies like mobile phones as radical
artmaking devices (Geroimenko 2013:vii). Moving away from the restrictions imposed
on AR as medium defined by the informatic overlay or by remediating older media,
artists working with mobile AR have forged new critical pathways, such as those
described in Geroimenko’s collection. In media art, mobile AR – popularized by
commercial products such as the smartphone games described earlier – takes
geo-location into activist contexts.

Mobile augmented reality art (MARt) emerged as a cultural force in AR from about
2010, with the influential Manifest.AR group founded on January 25th, 2011,
following their ground-breaking guerrilla exhibition/intervention in the Museum of
Modern Art, New York, We AR in MoMA (2010).31 There, organisers Mark Skwarek
and Sander Veenhof conspired to stage an exhibition of augmented art without the
permission of the gallery, and got away with it. Though the artists held tours of their
artworks, the show went under the radar of the gallery authorities, and inaugurated a
new movement in activist and interventionist installation. Subsequently, the group staged
other interventions where geo-location was used to surreptitiously place augments at
canonical and politically loaded sites, such as outside the New York Stock Exchange
during Occupy Wall Street (Skwarek 2014:3), and at the Venice Biennale 2011 and
2013 (Thiel 2014:31). During Occupy Wall Street, the area in front of the New York
Stock Exchange was off limits for protestors, yet the Manifest.AR group were able to
stage a ‘flash mob’ waving smartphones rather than obvious signage (Skwarek
2014:17). While augments still operate as informatic overlays, they are skewed to
critical ends by an activist culture. A deeper knowledge of how information operates
via layering in AR also featured in some of this work. For example, by hosting their
work on private ‘layers’ in the app Layar32, Manifest.AR avoided the marketing noise
made by commercial mediatic assemblages. Such strategic uses of augments in
conjunction with mobile devices, not only encouraged a critical turn in thinking about
the informatic overlay, but also implied a more extensive concept of embodiment.

Moving around a site to discover augments on a smartphone involves a trajectory
through physical space that engages meanings that are both pre-existing and
inscribed. Space is not inert, waiting to be written on by the artist/activist: space can
be considered as expressive, as will be examined shortly (DeLanda 2007). Re-thought
as affective toward its human inhabitants, space can be conceived as a force of the
nonhuman that makes connections with embodied actions, generating affects that
adapt behaviours and practices. Artworks using mobile AR that likewise imply space
might be expressive, and that work alongside the embodied actions of participants to
co-compose the experience, are Tamiko Thiel and Will Pappenheimer’s Biomer
Skelters (Liverpool 2013 and various iterations)33, and Janet Cardiff and George Bures
Miller’s the City of Forking Paths (Sydney, 2014-2017).34

In Tamiko Thiel and Will Pappenheimer’s Biomer Skelters (Liverpool 2013 and
various iterations), a participant walks around a pre-determined area of a city with
their smartphone. As they move the camera sensor, they see an array of digital plants
appear on their phone screen. The participant holds a Zephyr heart rate monitor to
connect a bespoke smartphone app to their heartbeat.35 The frequency of the signal
generated by the beating rhythm of their heart is converted into the augments – virtual
plants – that populate the ‘biome’, a term borrowed from biology that describes a
community of plants and animals living together in a ‘congruous ecology’ (Woodward
2008:2). In this case, the biome is an urban landscape populated by data, plants and
bodies.36
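
As a schematic illustration of this physiological mapping – my own reconstruction for explanatory purposes, not the artists' actual code – the sketch below converts a stream of beat-to-beat intervals from a heart rate monitor into a planting tempo, dropping a virtual plant at the walker's current position on each simulated heartbeat. The coordinates, intervals, and function names are all hypothetical.

import time
import random

def beats_per_minute(intervals_s):
    """Average heart rate over recent beat-to-beat intervals (seconds)."""
    return 60.0 / (sum(intervals_s) / len(intervals_s))

def spawn_plant(lat, lon, bpm):
    # Placeholder for seeding an augment into the shared 'biome' layer.
    print(f"plant at ({lat:.5f}, {lon:.5f}) seeded at {bpm:.0f} bpm")

walker_lat, walker_lon = 53.4084, -2.9916   # e.g. a street in Liverpool
intervals = [0.85, 0.82, 0.88]              # stand-in for monitor readings
for _ in range(5):
    bpm = beats_per_minute(intervals)
    spawn_plant(walker_lat, walker_lon, bpm)
    time.sleep(60.0 / bpm)                       # one plant per heartbeat
    walker_lat += random.uniform(-1e-4, 1e-4)    # simulated walking
    walker_lon += random.uniform(-1e-4, 1e-4)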

The concept of combining an art-game form, an affective computing network, together
with algorithmic botany produced through AR, marks a physiological turn toward
embodied action in mobile AR. Operating as a self-organising system tethered to the
physical activity of walking – hence conjoining physical world with digital via
enactment – the participant of this art-game becomes a vital part in speculatively
generating a ‘natural’ rejuvenation of the city. As the game never unfolds the same way
twice, each experience is highly differentiated and multiple meanings layer on top of
one another at the same geographical sites. During this active movement across an
urban landscape, tangible changes are made by the participant, and each player is
involved in a lively botanical re-inscription of the city (Wright 2018a). Members of the
public are accorded a meaningful role as ecological change makers in their own
community. Beyond the game, Thiel and Pappenheimer’s real time re-assembly of a
digital biome across the topology of urban physical space perhaps contributes to a shift
in thinking urban design, introducing new design possibilities for a somewhat
homogenous urban ecology of the contemporary city.

In opposition to the more mainstream uses of augmentation (such as those noted
earlier in this chapter), Lev Manovich has cited artist Janet Cardiff’s audio walks
(dating back to 2005) as an exemplar of the poetic deployment of ubiquitous
technology:

Their power lies in the interactions between the two spaces – between vision
and hearing (what users are seeing and hearing), and between present and
past... (Manovich 2006:226).

By directing the participant toward conflicting perceptual zones, Cardiff’s work shifts
conventional preconceptions about that space, transforming relations between
participant and site. Janet Cardiff and George Bures Miller’s the City of Forking Paths
(Sydney, 2014-2017) places the participant in a situation where they must follow the
audio-visual logic of an AR embedded video, along the exact cartography set out by the
narrative. Participants play a video on their smartphone and follow along with the
artists’ shamanic audio-visual narrative as it meanders through The Rocks district in
Sydney. Required to trace multiple narrative flows at the same time, the participant
must attune to the work as it unfolds: the video stream playing constantly on the
phone’s screen; the binaurally recorded narrative playing through headphones;
and, the parallel ‘reality’ of the street experience during the walk (Wright 2018b).
These nuances are less narrative than embodied: participants must pause and sit, walk, take
multiple turns, follow a tunnel under a street, and so forth. Confronted with at times
startling imagery on-screen — a phalanx of office workers dimly lit with mobile
phones, a gagged man with duct tape over his mouth wearing a straight-jacket in
Miller’s Point — the participant must perceptually negotiate a parallel flow of
experience between physical and digital. If they deviate from the ‘forking paths,’ they
lose their place; for example, by taking a turn down the wrong street, they are cut adrift
from the artwork, and the co-emergent experience of physical and digital is broken.
In this way, the work operates alongside each person’s sensory
apprehensions and habits, foregrounding the role of the body in producing a MR
experience rather than being framed by technical constraints such as screen,
transmission and resolution.

In these mobile AR artworks the movements of and between participant and
smartphone work together to extend the augmented space into geographical space.
Highlighting the embedded physical entanglement between mobile device and user,
physical and digital mobilities extend the augmented experience toward a more
complex and heterogenous material assemblage that does not exclusively reside in
screen space. Art experiences that interpolate the performer or participant in
extensive (outside the frame) and intensive (sensorial) compositional modes explore
AR from the standpoint of an embodied intra-action engaging participant, augments
and ecology. This triumvirate will be explored more closely in chapter 3 during a
discussion of my mobile MR software assemblage, the Wild Versions (2017).

Materialist approaches to the interface

As I have suggested above, there are many artistic deployments of MR that create more
complex relations between digital and physical spaces than are found in commercial and
industrial arenas. Hence, we require some concepts drawn from what I will broadly
call materialist approaches to computation and the interface. A materialist approach
to the interfacing of the physical and digital explores the contingent relations between
all elements (human and machine); how these reconfigure and re-assemble in situ and
across the specific spatiotemporal phenomena that take place while interfacing. Here,
looking ‘beyond buttons’ is a critical practice, as Anderson and Pold (2011) note:

... investigation of the interface does not stop at the computer’s surface but
goes beyond the buttons and reaches ‘back’ into history, and ‘through’ to the
human senses and perception, ‘down’ into the machine, ‘out’ into society and
culture (Anderson and Pold 2011:3).

In my research, consideration will be given to ways that a materialist conception of
interfacing can be inflected to MR through hybrid human-nonhuman data-plant-body
systems, with the aim of producing a performative space of negotiation between
material forces. The first step toward such a performative space requires locating a
more complex conception of interface than that provided by computer science
research, engineering paradigms, or commercial and industrial interaction design
practice.

Alexander Galloway (2013) and Brandon Hookway (2014) have posited non-
instrumentalist accounts of the interface that uncover its neglected social, political and
networked conditions. In The Interface Effect (2013), Galloway explores the interface
not as an object (or a creator of objects) but an ‘effect.’ ‘Interface’ is the structure by
which a centre relates to an edge that it seeks to control in order to extract value
(2013:41). Galloway argues that interfaces – such as those found in gaming and
entertainment industries – are imbricated in systems of value extraction: the game
itself bears the stratified imprint of an ideological structure. For example, in an
analysis of the massive online game World of Warcraft (Blizzard Entertainment
2004), he argues that the commerce and economics of the frequent in-game purchases
one must make in order to ‘battle’ successfully, mirrors everyday life in a neo-liberal
capitalist society (Galloway 2013:42-44).

Hookway is firm that the interface is not a surface, investigating it instead as a
relational assemblage — a conditional system of materialities whose task is to grapple
with control as an industrial problem. In so doing, however, the interface actually
reveals the unpredictable edge of control. Such a mode of (second-order cybernetic)
control is haunted by the entropic dynamic of self-organising systems. Drawing on
nineteenth century thermodynamic theory, Hookway describes the interface as a
series of transitions that occur at the boundaries of a chaotic system, in a zone of
contestation whose operations are procedurally determined by the unpredictability of
their relations with a larger matter-flow (2014:60). The interface can no longer be
simply an object in itself. Hookway’s view stands against those who seek to deliver an
instrumental account of the interface as a ‘fully realizable technology or a soluble
problem with regard to a design methodology’ (16). A shifting set of relations aimed at
control of the system, but only ever approximately achieving it, the interface opens the
internal elements of the computational system to external potentials in operations of
emergence that create assemblages (44).

Approaches such as those by Galloway and Hookway are useful for artistic approaches
to MR, since they apprehend interfaces as more than tools that afford access to a
technological virtual. As analysed earlier in this chapter, approaches to augments as
an informatic overlay are themselves implicated in systems of value extraction,
product marketing, and work efficiency. Re-positioning the interface from its object-
like status as a surface (or surface of a ‘thing’) to a more open relational assemblage,
acknowledges the complex interrelations that enmesh concrete physical and digital
virtual world systems. Thinking MR as a dynamic interface that intra-actively
negotiates with other assemblage elements across the physical and digital allows new
kinds of practices that open MR to emergent performative materialities.

Materialist thinking through software studies and media art

Before we can arrive at an adequate theoretical expression of the affordances and
capacities of MR as an experimental medium, software needs to be articulated not as
a product or object, but as a processual element in a materially entangled system. Such
a system is not purely technical; it also forms powerful connections with affective
forces, through vectors producing relations that elude the purely technical.

Mobilising concepts from corporeal and cultural theory — such as agency, materiality, and sociality — Adrian MacKenzie has argued that data structures and their elements, as well as code and coding practices, are performative and need to be addressed in their singular operations and functions, rather than as transmitters of ‘meanings’ or executors of commands (2006:75). MacKenzie does not explore code as an object (with
fixed form and bounded parameters), but as a nexus that sets in motion a series of
interconnecting affects and relationships across technical and social networks:

Code is a multivalent index of the relations running among different classes of
entity: originators, prototypes and recipients. These classes might include
people, situations, organisations, places, devices, habits and practices. In code
and coding, relations are assembled, dismantled, bundled and dispersed within
and across contexts. Such relations are inextricable from agential effects, from
asymmetry between doing and being done to (2006:170).

MacKenzie’s analysis reveals software, algorithms and data structures as not only
technical (and stable) elements but as highly provisional, individuated, sociocultural
events, operating through their connections and couplings, across human and
nonhuman networks that make as well as break connections. Code is, rather, a mode of sociality that forms networks of agency and new possibilities of embodiment with and for its users.

David Berry (2011) has a useful conception of code as ‘computational logic located
within material devices’ (63), where code produces a series of materialities conjoining
the activities of the end user, the creative writing of the programmer, and the devices
that run executable commands. Together these engender code as a relational system,
which can be deployed in any given cultural milieu, with quite specific affects and
effects. For Berry, when embedded within technical devices, code takes on an agential
role, articulating the nuances of the software medium and linking those nuances to
autonomous agents, applications, and user behaviours. Code is located through its
material relations with both software and corporeality (experienced both individually
and collectively via participation), enabling a conception that extends far beyond a
series of executable commands within a programme. Deployments of assemblages –
whether these be of the interface or code – are significant for this study and afford
diverse material elements the capacity to coalesce according to their own affordances,
intensities, flows and attractions. The assemblage itself then becomes a re-
configurable morphology that actively resists structuring or engineering as a limited
technical operation: its elements continually prehend a desire for shifting,
differentiated re-assembly.

Trajectories that favour the transdisciplinary approach taken by researchers from
materialist and software studies contexts provide a set of conceptual tools for the
software assemblage. As a provisional series of individuated technical-material-
discursive formulations, software assemblages afford an approach to interfacing and
augmentation as processual, material, and relational. Such a perspective reaches
across both physical and screen events, as well as across modalities (of sight, sound
and touch), and pulls out threads from various disciplines and practices such as media
art, computer science, experimental biology, quantum field theory, and gaming. A
materialist approach facilitates an understanding of the temporal and spatial relations
used to generate new events emerging at the conjunction of human and technical
assemblages. If we only see AR/MR as an information layer, we miss its capacity to
provoke multimodal perceptions, and we miss the embodiment that is the mainstay of
‘user’ behaviours. We also miss the ways in which the ‘media assemblages’ (Fuller
2005:13) that are AR and MR are always part of broader technosocial shifts; for
example, the re-purposing of the phone as an entire medial space and a space for the
emergence of new social behaviours.

Through critically responding to the RV Continuum and to engineering and computer
science approaches that have found their way from AR through to MR, I have
questioned the need for such a taxonomy by looking elsewhere, such as to media art
practices. Tropes, metaphors and technical elements such as ‘presence’, high
resolution display, and simulation have been used by many interaction designers as a
checklist of how to compose AR/MR/VR experiences. Examining the informatic overlay as it appeared through engineering, industrial and commercial examples allowed us to understand the informatic role of digital augments in these contexts. However, as I have argued, the application of categories, taxonomies or criteria to delineate and create inclusions or exclusions would restrict MR discourse and practice
in media art. Such an approach would offer limited chances for new senses of
embodiment, restrict a participant’s sense of agency in digital space, and overly
program the shape of the MR experience for participants/performers. The informatic
overlay approach was traced through diverse examples and revealed as a specific
design approach where augments convey informatic content, shaped by Milgram and
Kishino’s taxonomy in co-operation with other programmatic mechanisms. The
problems caused by a taxonomic understanding of MR were not solely concerned with
the informatic overlay. They also related to the hardware devices that materialized
digital augments – such as the HUD and the smartphone – as well as the design
practices applied to the deployment of augmented material.

Closer to the interests of this study are MR experiences as bespoke relational
arrangements or assemblages, manifest in the work of the artists discussed here, such
as Blast Theory, Rafael Lozano-Hemmer, McCormick and Nash, Thiel and
Pappenheimer, as well as Cardiff and Miller. As we saw through the analysis of
Sandbox (2010), for example, augments must be given due consideration as
nonhuman forces that prehend affect and beckon human bodies toward senses of
embodiment that are spontaneous and emergent with a computational network. This
point will be examined more closely in the next chapter, when I explore augmentation
as relational to intra-action in the software assemblage formulation.

Now that we have identified the crux of the problem with treating digital augments as
an informatic overlay, we can embark on a trajectory that morphs augments into a
performative digital material. Instead of accepting the established interaction
paradigms from engineering and computer science discourse, this research will
develop a critical posthumanist approach where augmented materialities emerge
through processes of intra-action that imbricate my performing body, choreographic
data objects, living plants, custom designed software, and hardware sensors: my aim
will be to assist in the materialization of various mixes of reality that emerge off screen, as well as on. Moreover, I will test – through experimental art practice – the
software assemblage as a formulation that might offer the potential to extend MR
beyond the informatic overlay approach and gesture toward the vitality of an affective
world space. Developing techniques of performative interfacing that challenge the
conventional informatic overlay approach, we shall attune our senses to the potential
of emergent modes of embodiment that arise when digital augments become
performative.

1 At different points in this thesis, the term ‘virtual’ will be approached both in a philosophical
and a technological sense. The following section speaks to its technological sense in computing,
where the virtual is ‘that which is simulated by computer technology’ (from the Oxford English
Dictionary online). The philosophical ‘virtual’, as a concept from Gilles Deleuze, will factor more
in chapter 2.
2 As identified by Milgram and Kishino (1994), there are two technical ‘types’ of MR: Augmented
Reality (AR) and Augmented Virtuality (AV). This thesis is not concerned with AV, since analysing
the augmentation of technologically virtual worlds would require adopting an entirely different set
of strategies and methods. My research focusses instead on the mixings of reality and the digital
that speciate from AR and have migrated to MR.
3 While there has been much technological and historical development since 1994, the concept of
the RV Continuum is still widely applied. New taxonomies have been made, accounting for
emergent technical developments. For example, Wang and Dunstan note in their more recent MR
taxonomy for industrial applications: ‘The goal here is to minimally modify and complement,
rather than replace, Milgram’s original MR continuum by considering media representation
features and tracking technology, and discussing the input and output features separately’
(2011:495).
4 Retrieved from https://en.wikipedia.org/wiki/Wikitude (accessed 9 February 2014).
5 See Simon Perry (23 October 2008), “Wikitude: Android App With Augmented Reality: Mind Blowing”, digital-lifestyles.info (accessed 9 February 2014).


6 Pokémon GO, while significantly more popular, is not the first example of a massive multiplayer
AR game from Niantic Labs. Much of the mechanics for the game were based on an earlier application called Ingress (2013), where (again) two factions compete to secure virtual portals and take over real territory. https://en.wikipedia.org/wiki/Ingress_(video_game)
7 While AR mode is only available on some phone models, the game is known primarily as an AR
game. https://support.pokemongo.nianticlabs.com/hc/en-us/articles/115015868188-Catching-
Pokémon-in-AR-mode-iOS-only-. For example, Vladimir Geroimenko’s forthcoming edited
collection (Springer 2019) specialises in Pokémon GO as an AR paradigm.
8 Retrieved from https://www.snapchat.com (accessed 2 July 2017).
9 Retrieved from https://lensstudio.snapchat.com (accessed 13 February 2018).
10 Additionally, there have been issues around corporate data collection, and responsibilities
toward data privacy, where it seems that games such as Pokémon GO and Ingress are being utilised
to gather user information through both geolocative data and computer vision. See David Meyer’s
article (20 July 2016), http://fortune.com/2016/07/20/pokemon-go-germany-privacy/ as well as Kate Conger (2016), “Niantic responds to senate inquiry into Pokémon GO privacy”, retrieved
from https://techcrunch.com/2016/09/01/niantic-responds-to-senate-inquiry-into-pokemon-
go-privacy.
11 Full article here: https://www.artefactgroup.com/articles/mixed-reality-without-rose-colored-glasses/. Ironically, while claiming to be the opposite, the article is utterly rose-coloured.
12 Retrieved from https://www.magicleap.com (accessed 2 August 2018).
13 Retrieved from http://blog.leapmotion.com/northstar/ (accessed 1 August 2018).
14 The resistance that many potential buyers have toward HUD wearables is manifest in the sharp
decline of the much heralded technology, Google’s Glass. While such issues are outside the scope
of this research, the examples described underscore the point that the informatic overlay should not be understood in a purely technical sense: wearables are culturally and socially entangled, implicated in practices that extend beyond an informatic context. The commercial mediatic
assemblages of Silicon Valley, driven by neo-liberal profit, frequently fail to interrogate the
questionable practices that such wearables generate, occluding the relevance of the unexpected
behaviours that co-emerge with these apparatuses. For example, hoping the tide of dissent for
Glass may have ebbed, Google is relaunching the product, this time with AI capacity.
https://www.wired.com/story/google-glass-is-backnow-with-artificial-intelligence/ (accessed 20
September 2018).
15 They note: ‘What is new about new media comes from the particular ways in which they refashion
older media and the ways in which older media refashion themselves to answer the challenges of
new media’ (Bolter and Grusin 1999:15).
16 Bolter and Gromala attempt to distinguish between the window and the mirror, where
VR is seen as following the Renaissance paradigm of the ‘window,’ attempting to provide the viewer
with a seamless perspectival experience, and AR is ‘reflective’ – in their text, a more contemporary metaphor (128). However, as AR/MR practice has played out, both window and mirror are tropes that structure the informatic overlay. This is especially apparent when analyzing AR on smartphones and tablets, where the screen itself is a pervasive window.
17 Unfortunately, while the book is published in 2017, all of the material from the experimental art
chapters – the area of interest to my study – is created before 2012, and additionally repeats
artwork discussions contained in Vladimir Geroimenko’s (2014) text Augmented Reality Art. For
example, EGG AR: Things We Have Lost, by John Craig Freeman, a range of AR interventions by
Tamiko Thiel, Conor McGarrigle’s NAMAland, and Mark Skwarek’s #arOCCUPYWALLSTREET
(2011) all appear in the earlier text.
18 For example, some codes triggered interviews with software developers such as Jay Wright from
Vuforia and Jay Bolter discussing Argon, while others triggered artwork documentation such as
BC Bierman’s Miami Wynwood Walls AR graffiti mural project (Morey and Tinnell 2017).
19 Aurasma was a proprietary application for smartphone and tablet used to create augments
(known as ‘auras’) and attach these using image markers to physical objects.
https://www.aurasma.com.
20 Gimme More: Is Augmented Reality the Next Medium?, directed by Nicolas Henchoz, was presented at the Eyebeam Art and Technology Center in New York (February 21 – March 2, 2013).
21 Retrieved from https://www.eyebeam.org/events/gimme-more-is-augmented-reality-the-next-medium/ (accessed 11 June 2014).


22 Documentation retrieved from https://vimeo.com/60286448 (accessed 11 June 2014).
23 See project description, retrieved from https://www.blasttheory.co.uk/projects/uncle-roy-all-around-you/. Mixed media artwork performed in various locations in London, U.K. Premiered at the Institute of Contemporary Arts in London in June 2003.
24 See project description retrieved from https://www.blasttheory.co.uk/projects/flypad/. Site-specific AR artwork designed for The Public Gallery, West Bromwich, England, 2009.
25 Created for Glow, Santa Monica Beach, Santa Monica, United States, 2010.
26 Technical information from project description retrieved from http://www.lozano-hemmer.com/sandbox.php (accessed 18 May 2016).


27 UnderScan was funded by the East Midlands Development Agency and presented in 2007.
28 Relational Architecture 6: Body Movies was presented at the Ars Electronica Festival at the OK Centrum (Linz, Austria) in 2002 (accessed 18 May 2016).


29 Retrieved from http://www.lozano-hemmer.com/sandbox.php (accessed 18 May 2016).
30 Manifest.AR’s official site: https://manifestarblog.wordpress.com. Founding members are:
Mark Skwarek, Sander Veenhof, Tamiko Thiel, Will Pappenheimer, John Craig Freeman,
Christopher Manzione, Geoffrey Alan Rhodes, and John Cleater.
31 Veenhof and Skwarek invited artists Patrick Lichty, John Craig Freeman, Tamiko Thiel, Will
Pappenheimer, and Christopher Manzione to create augments that were then geo-located in
MoMA using the application Layar. Augments remain live in the museum. Documentation and project link here: http://www.markskwarek.com/We_AR_in_MoMA.html
32 See https://www.layar.com. (accessed 21 April 2013).
33 Thiel, Tamiko, & Will Pappenheimer. 2013–ongoing. This artwork was first staged at FACT Gallery (Liverpool), then had subsequent major iterations at ISEA2014 (Dubai) and Virtuale Festival (Switzerland). http://www.biomerskelters.com/ (accessed 18 April 2014).
34 Cardiff, Janet & George Bures Miller. 2014. The City of Forking Paths, Augmented Reality app available in various locations, The Rocks, Sydney, Australia. Retrieved from https://itunes.apple.com/us/app/the-city-of-forking-paths/id870332593?mt=8 (accessed 16 April 2014).
35 The shifting pace of the heartbeat affects the rate at which these plants propagate; the goal is to achieve a relaxed heartrate, which then triggers the propagating process. Competing with another team to propagate the most plants, physiologically-driven data etches pathways into the city across a data network.
36 Affective computing has long experimented with limited aspects of embodiment: Thiel and
Pappenheimer received their technical guidance from scientists researching affective computation
at John Moores University. The collaboration between John Moores University and Thiel and Pappenheimer
was organised by FACT Gallery, Liverpool. https://www.ljmu.ac.uk/research/impact-
achievements/bio-sensing-meets-art. (accessed 16 April 2015).

CHAPTER 2

Augments, apparatus, and intra-action.

In this chapter, techniques for performative interfacing will be explored, with
reference to Tactile Light (2016) and Tactile Sound (2017), two software assemblages
that utilise living plants as a projection screen and reactive surface. I will be
investigating various techniques for an altered MR, such as: performative gestures that
create different relations with hardware devices such as the Leap Motion interface;
augmented environments that combine media technology with living plants,
sometimes individually, sometimes systemically; and techniques of iterative re-
assembly that shift the relations between corporeality, data, and plant systems. Here,
I will use the software assemblage as a practice-based technique to examine some of
the ways that we might explore co-emergence between the digital, the physical, and
the organic, as ‘realities’ of processual making. In Tactile Sound, I will investigate
augments as they co-emerge in tandem with my hand gestures, and with sound
generated from piezo microphones embedded in a specially grown sheet of the first
shoots of Triticum, or common wheat plant, otherwise known as wheatgrass.1 In
Tactile Light, I will be fielding the same growing method and plant matter, but as a
large-scale screen whose surface diffracts the light from projected augments.
Emerging through a digital system that has been configured to disrupt the
conventional rules of a consistent and realistic perspectival space, augments will
disrespect the conventions and design of the informatic overlay, assisted by the
wheatgrass as a diffractive materiality.
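
By way of a concrete sketch, the fragment below illustrates one way an amplitude envelope taken from a live piezo input could be mapped onto a parameter of a digital augment. It is indicative only, written in Python against the third-party sounddevice library and assuming an audio interface carrying the piezo signal; the hypothetical amplitude_to_scale mapping stands in for the actual routing, which in Tactile Sound is assembled within the Unity SDK:

    import numpy as np
    import sounddevice as sd  # third-party audio I/O library (assumed installed)

    def amplitude_to_scale(block, lo=0.5, hi=3.0):
        # Map the RMS amplitude of an incoming audio block to a scale
        # factor that could drive the size or intensity of an augment.
        rms = float(np.sqrt(np.mean(block ** 2)))
        return lo + min(rms * 20.0, 1.0) * (hi - lo)

    def callback(indata, frames, time_info, status):
        # Each block of piezo signal modulates the augment parameter;
        # printing stands in for a call into the rendering system.
        print(f"augment scale: {amplitude_to_scale(indata[:, 0]):.2f}")

    # One input channel from the interface into which the piezos feed.
    with sd.InputStream(channels=1, samplerate=44100, blocksize=1024,
                        callback=callback):
        sd.sleep(5000)  # listen for five seconds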

While the placement of a digital overlay in screen space might technically define MR
from an engineering or computer science point of view, the technical capacity of a
medium need not define the practices developed by other fields, such as media art.
Technical devices can always be utilised in multiple ways, and methods can be
explored that shift the intended uses of hardware and software in directions that
extend or alter the original design. The particular versions of MR produced by this software assemblage research hope to reveal a performative side to everyday commercial/consumer interfaces for data augmentation, such as the Leap Motion. I
develop techniques for activating the gestures captured by the Leap Motion interface
as a performative device that will open up a space for intra-action with human and
nonhuman participants. In this chapter, I examine the potential for new modes of
performative interfacing using the Leap Motion gestural controller as a hand-held
device, rather than leaving it statically mounted on a desktop, as is its intended design.
Re-configuring the Leap Motion as hand-held materially connects with the explorations of mobile AR as artistic practice by the media artists discussed in the previous chapter, such as Thiel and Pappenheimer, as well as Cardiff and Miller. In chapter 3, again following these threads, mobility will be further extended, as I incorporate the Leap Motion into a homemade mobile MR system to capture a live performance in a remote geographical location.

My techniques for performative interfacing will be contextualised through reference
to artworks by Golan Levin and collaborators, OpenEndedGroup, and Yvonne Rainer.
The Augmented Hand Series (Levin, Sugrue, MacDonald 2014)2 allows participants
to experience the mutation and subsequent adaptation of their physical hand in real
time; Loops (Kaiser, Eshkar, Cunningham, Downie, 2001)3 takes choreographed hand
movements by legendary artist Merce Cunningham and adapts those using artificially
intelligent algorithms and strategies of recursion; and, Yvonne Rainer’s 8mm black
and white film Hand Movie (1966)4 performs a choreographically radical set of hand
micro-gestures that challenge performance conventions in dance. Referencing the
radical choreographic techniques of Cunningham and Rainer, as well as the sense of a mutated yet embodied hand articulated by Levin et al., these case studies will resonate
with my performative approach to interfacing with digital augments. Developing a
media art practice attuned to the different materialities of all participating elements
in the software assemblage, pays attention to the ways material elements interface as
they performatively unfold, inside and outside of screen space.

I will be examining the shifting relations that entangle the key material elements in
the software assemblages in this chapter, via Barad’s concepts of intra-action and
apparatus. This will also challenge the dominance of largely visual methods for
apprehending augmented content, outlined in the previous chapter. Brian Massumi’s
notion of semblance – in which the visual itself is understood as a modality already
supplemented by movement – will provide an entry point for exploring new
approaches to augmented movement in MR. Before we reach the software
assemblages in this practice-based research, I will need to clarify the critical difference
between the technological virtual and the Deleuzo-Guattarian virtual, where the
former is a concept from computer science discourse, and the latter a crucial aspect of the machinic assemblage.

Interrogating 'real' and 'virtual'

As established in the previous chapter, artistic approaches to MR diverge from the
commercial and industrial understandings of the same medium. In my research, I
have preferred the terms ‘physical’ and ‘digital’ to the more often used ‘real’ and
‘virtual’, since the latter concepts have precise meanings in philosophy as well as in
media art theory. Throughout less experimental approaches to MR, the terms ‘real’
and ‘virtual’ are unproblematically deployed. There, in texts such as that by Milgram
and Kishino (1994) analysed earlier, the concepts are essentially spatial
differentiations: ‘real’ describes the space that humans physically occupy, and ‘virtual’
the space of nonhuman exchanges between pixels, code and signal. By extension,
displays such as screens, are instruments for delivering ‘virtual’ space, while ‘reality’
is the domain of the human. For Milgram and Kishino, the only problems to be
resolved in virtual space concern the quality/resolution of data objects and the viewing
mechanisms through which these are apprehended (Milgram and Kishino 1994:1321).
Intermingling of data objects and real objects occurs in digital screen space, not the physical world (1994:1232), unless the virtual aspects of the MR experience were to be
immersive (for example, environments such as a CAVE system). Such a spatial division
does not acknowledge that phenomena in MR are co-emergent; that they constantly
arise ‘in between’ real/physical and virtual/digital. While in the computer science
/engineering/commercial/industrial research axis, ‘virtual’ means a space that is not
‘real’, in media art theory and practice it has a more complex interpretation.

Drawing on Deleuze and Guattari, the virtual is a kind of conditioning field (a ‘plane
of consistency’) that ‘opens a rhizomatic realm of possibility effecting the
potentialization of the possible, as opposed to arborescent possibility, which marks a
closure, an impotence’ (1987:190). It cannot be physically accessed as such, although
the virtual is also immanent to any actuality.5 Actualisations of the virtual emerge
through specific conditions in a given moment, when exclusions or inclusions to the
machinic assemblage are selected by an abstract machine (and its relations to other
social, cultural, aesthetic machines) from a range of potential vectors:

Machinic assemblages are simultaneously located at the intersection of the
contents and expression on each stratum, and at the intersection of all of the
strata with the plane of consistency. They rotate in all directions, like beacons
(Deleuze and Guattari 1987:73).

Entangled, the potentials drawn out of strata by the machinic assemblage coalesce on
the plane of consistency, where they generate affects between modes of matter in
movement. Andrew Murphie undertakes a reading of the virtual from a Deleuzo-
Guattarian perspective that questions the conventional positioning of VR as a zone for
the enactment of mimesis and digital simulation (Murphie 2002). He argues that VR
technologies are accompanied by a shift from an interest in representational spaces to
questions of operation. Murphie argues that if VR is taken as an opportunity to
generate ‘co-extensive’ connections between the virtual and human perception
(Murphie 2002:8), it has the capacity to offer relations with nonhuman forces and
intensities that might otherwise be imperceptible.

In contrast to relational assemblages, whose operations might elude mimesis, the separation of ‘real’ and ‘virtual’ space in Milgram and Kishino’s account marks the epistemological separation of observer from object. As well, that same separation
prehends a mimetic rather than a morphogenetic outcome for augments and other
material elements in MR. Digitally generated spaces – such as screens – must be considered not as sites of mathematical abstraction, or containers for a ‘virtual’ that either simulates or supplements a ‘real’, but for the relational dimensions they engage as they intra-act with a range of expressive materials (data, bodies, and so forth). The display
screens privileged by the RV Continuum, are simply one element in my software
assemblage formulation. Screens are surfaces that data passes through, as it circulates
in an ecology of relations and modulations. A materialist approach that incorporates
techniques of performative interfacing, suggests the need to explore the contingent
relations between all elements (human and non-human) as they emerge, reconfigure
and assemble again. Through the iterative variations generated by the software
assemblages in this research, we shall examine these contingent relations more
closely.

Reinforcing the conception that contingent relations are composed of a multitude of
entangled and emergent elements, I will not be talking about the onscreen emergence
of data as representations of the real. Instead, I will be discussing the notion of
phenomena (Barad 2007:33), to acknowledge the inseparability of intra-acting forces
that co-constitute one another through the recursive movements of matter and
materials, manifest onscreen as well as off. Barad explains:

According to the framework of agential realism, phenomena are the ontological
inseparability of intra-acting agencies. Importantly, I argue that phenomena
are not the mere result of laboratory exercises engineered by human subjects
but differential patterns of mattering (“diffraction patterns”) produced
through complex agential intra-actions of multiple material-discursive
practices or apparatuses of bodily production, where apparatuses are not mere
observing instruments but boundary-drawing practices – specific material
(re)configurations of the world–which come to matter (Barad 2007:206).

Instead of focussing on representational abstractions – such as of the body or other
‘real’ objects on the display screen – this analysis will pay the utmost consideration to
the ‘intra-acting agencies’ of phenomena, and the ‘diffraction patterns’ this material
movement makes. Through the software assemblages that extend from such critical
posthumanist considerations, I will draw out the multi-sited entanglements between
physical and digital, highlighting important operations – such as performative
interfacing – that re-situate devices like the Leap Motion as apparatuses with
‘boundary-drawing practices’. In my software assemblages, these boundary-drawing
practices will not only trace the outline of data or matter in movement, as in the
conventional view of an apparatus: through intra-actions with a performer/participant
they will co-compose the MR experience.

Agential realist thinking will be applied in two major ways in my research: to a
conception of matter working through bodies, as in the collisions between hand micro-gestures, living plants and data systems; and to matter as an element moving through an apparatus, to which it enacts changes along the way, and through which it is changed.
For example, to the latter point, computational tracking – of the kind enacted by the Leap Motion – anchors itself to the physical hand in an apparently seamless flow. However, this can be ‘unjoined’ and made to slip out of synchronicity through disruptive gestural techniques, and by exploiting technical idiosyncrasies such as ‘signal inertia’ (discussed in chapter 3). Causing the hardware interface to track
differently, through my performative and choreographic interventions, generates a
new relational diagram that breaks open the original interface design: this not only
shifts the materialisations generated, but also shifts the intended interface design by
manifesting a mutant use case.

In the previous chapter it was argued that the materialist approach taken by the
software assemblage would allow a more performative version of MR to emerge
outside of the taxonomic structuring of the RV Continuum. To facilitate a re-worked
MR, this research utilises a technique where augments and corporeal gesture generate
a field that is intra-active. Technically, the design operates through the Vuforia AR
extension within the Unity SDK, with the Leap Motion gestural controller used to track
the human hand. This networked configuration de-privileges the frame of the screen,
since the AR camera does not seek objects with which it would surround via a
symmetrical pictorial frame (as in image-based tracking), nor does it seek to use
computer vision to identify a fixed layout of data points such as a QR code or other
physical marker. Instead, digital augments/hand avatars are tracked to the gestures of
a physical hand, whose fluid corporeal movements are transposed to the digital space
of Unity through the infrared sensors of the Leap Motion: the infrared signal will take
on a greater importance in chapters 3 and 4, during the Tactile Signal performances
(2018), and in my final performance piece, Contact Zone (2018). The digital
augments/hand avatars are made visible in the Unity SDK by the Vuforia AR
extension. Later in this chapter, I will bring Barad’s work on apparatuses, diffraction
and material-discursive practices to bear on the software assemblages Tactile Light
and Tactile Sound.6 Before turning to a more detailed exploration of these
instantiations of the software assemblage, I want to bring my research into contact
with an artwork by Golan Levin, Chris Sugrue and Kyle MacDonald, that likewise
suggests how corporeal and augmented hands might function intra-actively, as
relational emergences.

Relational emergence in the Augmented Hand Series

Approaching a screen, a gallery visitor places their hand in a small black box.
Confronted with a screen image of a virtual hand that confuses the indexical image of
their physical hand, they must perceptually wrestle with a relational mixing of ‘spaces’
unfolding. Participants are faced with a series of digital permutations that seem to defy
the actual physical state of their hand. The Augmented Hand Series (Levin, Sugrue,
MacDonald 2014) re-constructs, in real time, a participant’s hand, adding or
subtracting, warping and variously skewing the familiar territory of one’s own body.
Once in the box, the hand is tracked by a custom-designed AR system that uses the Leap Motion interface to provide gestural data. Real time tracking allows a tight
connection between the original physical and adapted digital hand, with participants
able to articulate all fingers, even when there are six or four in the screen space.

From the responses at the Cinekid Festival where the work was first shown, we can say
that perception has been abruptly challenged, and that the unexpected experiences
provided by the new modelling of hands in digital space problematize the acts of
recognition apprehended by the participants in the experience.7 Conjoining mutated
hand models with the unique hands of various participants, produces a shock to
proprioception, and a desire to unlock the secrets that this strange machine offers to
the usual eye-brain-hand configuration. The challenges to perception posed by this augmentation place pressure on people’s capacity to perceptually process such
unexpected intra-active becomings as they are occurring, even though after the fact
they are clearly aware of the computationally generated disruption. Through these
astounding intra-actions, new experiences of the body emerge not only in digital space,
but through an adapted perception in physical space as well. With Barad, we might say
that both apparatus and the body (as corporeal and digital) are entangled, intra-
actively co-constituted in the same moment: when a participant places their hand in
the box, they become entangled in a material-discursive practice, that actually re-
draws the boundaries of their own embodiment. The artists are establishing the
conditions for the participants to feel new modes of embodiment, where perceptions
of the body as a discrete and bounded entity are challenged:

Can real time alterations of the hand’s appearance bring about a new perception
of the body as a plastic, variable, unstable medium? … the Augmented Hand
Series can be understood as an instrument for probing or muddling embodied
cognition, using a ‘direct manipulation’ interface and the suspension of
disbelief to further problematize the mind-body problem (Levin, Sugrue,
MacDonald 2014).

The Augmented Hand Series not only articulates a myriad of techniques for
transposing hand gestures into digital space, but further presents a situation where
the participant can experience a dislocation from the habitual perception of their own
body. Emerging in flashes of striking reconfiguration, these moments are not mere
entertainment. Rather, they explore the potential of the body to shape itself differently,
to articulate alongside digital adaptations. Arriving with their ordinary visual and
proprioceptive experiences of their hand, participants leave with an embodied
memory of a digitally adjusted corporeality. The device that allows the Augmented
Hand Series to generate these apparently seamless re-configurations is the Leap
Motion gestural interface. In the next section, I explore how the Leap Motion, when
coupled in different ways to other technical and corporeal aspects and gestures in the
software assemblage, might assist in further exploring MR.

From static to dynamic: Leap Motion as a performative interface

The Leap Motion is a proprietary gestural interface that tracks hand position, three-
dimensionally, in physical space to enable augmented representation of this
movement in AR/MR/VR. A small rectangular hardware device, it houses two infrared cameras and three infrared LEDs that capture images from the near-infrared band of the electromagnetic spectrum, a
capacity that will be explored more closely in chapter 3. As a hand is raised above the
Leap Motion’s infrared sensors, an image stream is captured and relayed to the Leap
Motion software: this signal is then passed to a Software Development Kit (SDK) — in
my case the Unity SDK. Using infrared to trace an outline of the physical hand’s
gestures and obtain movement data, the Leap Motion’s software turns the infrared
signal/image stream into a map of the human hand, which provides the control point
for any subsequent interactions with a screen space.
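
To give a concrete sense of this signal chain, the sketch below polls the device using the Python bindings that shipped with the Leap Motion V2 SDK; it simply prints palm positions, whereas my assemblages route equivalent data into the Unity SDK. The loop and its timings are illustrative rather than a description of my implementation:

    import time
    import Leap  # Python bindings distributed with the Leap Motion V2 SDK

    controller = Leap.Controller()
    time.sleep(1.0)  # allow the tracking service a moment to connect

    for _ in range(100):
        frame = controller.frame()  # most recent tracking frame
        for hand in frame.hands:
            pos = hand.palm_position  # a Leap vector, in millimetres
            # The 'map of the human hand' reduces, at this level, to a
            # stream of coordinate triples relative to the device itself.
            print(frame.id, pos.x, pos.y, pos.z)
        time.sleep(0.05)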

The Leap Motion began its working life as a desktop interface, and in recent years a
Head Mounted option that attaches the gestural controller to a range of commercially
available VR displays has been added.8 In creating different versions of my software
assemblages and working with a range of organic and inorganic materials, this
research has explored the Leap Motion’s capacities using both the desktop device only
and its HMD additions. This chapter investigates the performative potential of the
Leap Motion in desktop mode, while in chapters 3 and 4, head mounted mode is
utilised as well.

In ordinary use, one does not touch the Leap Motion. Whether mounted on the
desktop or an HMD, the device is not intended to be in tactile contact with human
hands. This is because tracking is only possible under certain restricted conditions,
such as placing the interface on a flat surface (Guna et al. 2014). To enable the Leap Motion to become part of an experimental software assemblage in which intra-action was enabled, a change in its relation with the performing body was necessary: a change from a computer vision-oriented use, achieved primarily through the capture of infrared electromagnetic signal, to a performativity that also included tactility.

In the software assemblages that follow, the Leap Motion interface is handheld rather
than desktop mounted. As hand held, embracing gestures that are atypical – such as
will be shown soon in Tactile Light and Tactile Sound – the device is in contact with
the corporeal body. Hence, it is affected by the modulations and rhythms of gesture,
rather than being only optically-oriented for its intended purpose, to track the hand.
Holding positions shift the data as it is captured, since distance, position and
orientation of the hand are fluid, loose and mobile. Taking inspiration from avant-
garde choreography by Merce Cunningham and Yvonne Rainer, my hands will be
enacting micro-gestural techniques for movement, described in more detail later in
this chapter. These techniques differ enormously from the ninety-degree angle that
ordinarily positions the Leap Motion adjacent to the hand in desktop use. Using this device hand held provides a more extensive range of gestural possibilities that enhance the effect of the moving corporeal body, involving the interface in a range of dynamic movements.
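
The consequence of holding the device can be put geometrically: the Leap Motion reports hand coordinates relative to its own frame of reference, so once the device itself moves, its motion is folded into the captured data. The following schematic sketch (Python with invented numbers, not the Leap SDK itself) shows how the same palm position materialises differently as the device is displaced and rolled in the holding hand:

    import numpy as np

    def to_world(palm_device_mm, device_position_mm, device_rotation):
        # Map a palm position reported in the device's own coordinate
        # frame into room coordinates. Desktop-mounted, the position and
        # rotation are constant; hand held, they vary with every gesture.
        return device_rotation @ palm_device_mm + device_position_mm

    palm = np.array([0.0, 180.0, 30.0])  # a palm hovering above the sensor
    flat = np.eye(3)                     # desktop: device lying flat
    rolled = np.array([[1.0, 0.0, 0.0],  # hand held: device rolled 90 degrees
                       [0.0, 0.0, -1.0],
                       [0.0, 1.0, 0.0]])

    print(to_world(palm, np.zeros(3), flat))                   # stable mapping
    print(to_world(palm, np.array([50.0, 0.0, 0.0]), rolled))  # shifted, re-oriented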

The software assemblages to follow in this research speculate upon the notion that deploying creative techniques for using commercially available devices in alternative ways opens a space for more indeterminate corporeal relations to arise. Here,
irregularities and serendipities of gestural movements between bodies and interfaces,
challenge preconceived assumptions carried over from HCI approaches to design that
focus on affording control of a data system to a human user. In media art practice, the
computational alignment of the corporeal body with the micro-temporal accuracy of
machines may not be desirable in all cases. I take this question up further in the
following section by looking at gaming and its interfaces on the PlayStation console.

The Leap Motion’s marketing tagline ‘truly immersive VR begins with your hands’ sells a perception of desktop VR that fully incorporates the ‘real world’ movement of
hands. When hands are the interface itself — as with a touchscreen or a gestural
interface rather than a peripheral device like a joystick or mouse — HCI research
understands that this results in more natural interaction, since perceptual as well as
motor skills are leveraged to afford control (Hutchins et. al. 1985; Thompson 2015).
The popular idea in HCI is that an interface should fade ‘into the background despite
its material presence’ (Jager and Kim 2008:45). However, if we deny the material
presence of the interface itself, then we are also assuming that the work that the
interface does is similarly ambient. In terms of the Leap Motion, the work this
interface does transposes the physical body to digital space, via a series of
computational operations that aim to mirror the corporeal in the technological virtual.
The assumption is that the body can be re-constituted with accuracy and precision in
digital space-time. Yet, this re-constitution involves a series of mathematically calculated translations that omit many of the body’s actual movements and gestures. It is a transposition from the fleshy matter of the corporeal body to the clean matter of pixels and voxels that yields, at best, an approximation. The extent to
which such approximations shift the data as it is transferred and converted, and how
this process intra-actively alters relations with the original physical gestures that are
materialised as digital information, requires further examination.

With the Leap Motion, the body is aligned with a data system through a touch-less
interface, so that digital matter might be visibly controlled. The wavelike gestures
leveraged by the ‘natural’ hand interaction become the ‘magical’ portal where digital
matter might be encountered as a mysterious ‘virtual’ other. In the industrial design
of 3D motion interfaces like Leap Motion, there seems to be a persistent idea of
controlling the invisible to invoke material effects. To explore the Leap Motion from a
performative and materialist perspective, we need to re-frame the assumptions of data
transmutation to reveal the goals of control. Further, we must acknowledge that the
translation of data from the body – such as hand tracking coordinates – is only ever a partial re-construction of corporeality. Data of all types will inevitably be elided,
while other information (such as positional tracking or hand orientation coordinates)
is privileged.

Returning to Barad, we might apply her consideration of the apparatus to inform an
approach to the Leap Motion which does not see it as simply an instrument for
measurement:

The apparatus itself is intra-active: ... apparatuses do not simply detect
differences that are already in place; rather they contribute to the production
and reconfiguring of difference. ... Accounting for apparatuses means attending
to specific practices of differentiating and the marks on bodies they produce
(Barad 2007:232).

The way that the Leap Motion re-constitutes corporeal data as digital information involves the use of its infrared sensors to distinguish a hand outline – using electromagnetic radiation not visible to the human eye. Through calls made to the
Unity SDK, the body’s gestures are matched to numeric values, events which are in
turn mapped to data objects (which appear on screen as digital hand avatars). Numeric
values, chosen from a limited set of variables, are approximated vectors of the physical
hand’s position in digital space: they are not the actual continuity of gestural phases
the body goes through as it moves in a physical environment.
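
The gap between gestural continuity and its numeric trace can be made concrete with a toy example (Python, with illustrative figures; the Leap Motion’s actual sampling and smoothing are more involved). A gesture idealised as a continuous curve is retained by any tracker only as a sparse sequence of position vectors, from which the intermediate movement can be interpolated but never recovered:

    import numpy as np

    def gesture(t):
        # An idealised continuous gesture: a spiral through space.
        return np.stack([np.sin(4 * np.pi * t), t,
                         np.cos(4 * np.pi * t)], axis=-1)

    # What the tracker retains: sixty discrete samples per second.
    t_frames = np.linspace(0.0, 1.0, 60)
    samples = gesture(t_frames)

    # Everything between consecutive samples is elided; downstream
    # software can only interpolate, not recover, the movement.
    gaps = np.linalg.norm(np.diff(samples, axis=0), axis=1)
    print(f"largest inter-frame jump: {gaps.max():.3f} (arbitrary units)")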

Yet, following Barad, this technical operation is not merely a method for measuring
and converting data. The notion of an apparatus as agential is a cornerstone of Barad’s
philosophy, and a direct challenge to conventions from classical science that place the
scientist at the centre of observation. Her critical posthumanist position gives
credence to nonhuman modes of matter as agential, where material arrangements of
apparatuses have the capacity to alter the materiality of bodies by materialising those
bodies differently. For example, in a discussion of the piezo transducer as a measuring
instrument to determine gender in ultrasonography, Barad shows that this device
enacts boundary-making practices toward the human foetus in utero, as the object of
measurement (Barad 2007:189-191). She argues that an instrument (such as the piezo
transducer), when coupled to an array of techno-scientific practices, produces a
material shift that alters meaning with each technical arrangement. The apparatus
occupies a dynamic nexus:

Importantly, apparatuses are not external forces that operate on bodies from
the outside; rather, apparatuses are material-discursive practices that are
inextricable from the bodies that are produced and through which power works
its productive affects. Apparatuses are phenomena, material configurations/re-
configurings, that are produced and re-worked through a dynamic of iterative
intra-activity (Barad 2007:230).

Barad takes a broader view of apparatuses outside of the instrumental approach often
associated with scientific fields. She understands apparatuses as ‘produced and re-
worked’ dynamically. Developing new performative techniques to conjoin the Leap Motion to my experimental software assemblages injects a dynamic interplay into digital augments. As well, it allows for the body – ‘inextricable’ from the apparatus in
the moment of interfacing – to iteratively contribute to further modulations.

Avoiding the logic of control

Before we further explore this approach, we need to inquire more as to the logic of
digital control manifest in devices such as the Leap Motion. This is most apparent in
the field of gaming, where Adrian MacKenzie (2002) has noted that the act of using a
gaming controller aligns the body with a data system:

Rather than converting structures into events, the real time animated computer
game seems to assimilate events to pre-existing structures, to select amongst
the possible events only those that can be processed in terms of translations of
polygons around the screen. Rather than real time play triggering events, the
very systems on which it relies seem to contain events within a strictly
controlled permutation of marks. There would be good grounds to argue that
there is no play here, or at any other ‘playstation’ (2002:159).

In juxtaposing the ludic potential of converting structures to events, against the actual
relation (in the PlayStation system) of assimilating ‘events to pre-existing structures’,
MacKenzie explicates the restricted way in which the corporeal body, treated as a
structure itself, is brought into alignment with the game world. Play is constituted as
a ‘strictly controlled permutation of marks’, masquerading as a free and extensive
environment. For MacKenzie, such marks function as demarcations of control that
shepherd the player toward goal-oriented tasks. Material-representational practices
that require the player shift their apprehension and conform to the demands of the
apparatus, function at both hardware and software levels of games such as Grand
Theft Auto V, from the hardware controllers used to manipulate the digital avatars (where hand movements are scaled down to accommodate the size and type required
by the buttons and toggles of the controllers), to the highly developed patterns of
game-based artificial intelligence that pervasively re-structure a player’s free
movement via, for example, avatar mobs and self-driving cars.9 A plethora of
programmatic boundaries channel the player through the gaming system, matching
corporeal input with data points.

While the act of aligning the body with hardware devices is a feature of all
commercially available controllers, including the Leap Motion, my research argues for
the need to creatively shift the body’s relation with controllers beyond such an
alignment. Artworks of the kind this research generates, and those which provide contextual dynamic examples of experimental practice in this field of MR, are not closed systems that require completion, or that lead the participant toward specific
goals. Through a conception of the apparatus as a phenomenon that — partially and
selectively — re-constitutes corporeal flows as a more performative, dynamic ‘digital’
matter, it will be possible to further disrupt the conception of a seamless alignment
between body and computational machine.

The corporeal body as digital

The seminal artwork Loops (Kaiser, Eshkar, Cunningham, Downie 2001) highlights
some of the issues that trouble the idea that corporeality can be wholly transposed to
data space. Loops is an interactive artwork that began its life in 2001 and has had
several iterations. It takes motion capture data from a solo dance performance for the
hand by Merce Cunningham, as well as his recitation from a diary entry of a visit to
New York as a young man. It couples these with generative algorithms to create real
time choreographic and sonic adaptations of Cunningham’s hand ‘dance’ data. When
opened to the realm of the digital, the precision of Cunningham’s movement is motion-
captured as data and then encoded relationally using AI algorithms. Stamatia
Portanova (2013) has commented on the programmatic complexity of the physical and
digital interrelations of Loops, stating that:

The primary interest of the choreography therefore is that it situates the whole
dance where it should be and in the way it should be, that is, in Cunningham’s
body as one, unique realization in the execution of a program. … When
performing Loops, Cunningham himself became, in short, an eternal object, or
an abstract idea (Portanova 2013:102).

For Portanova, the algorithms that mutate Cunningham’s physical hand data allow for a playing out of this data as an ‘eternal object’ in computational space. Portanova
brings an interpretation to Loops in which she reworks the physical/digital duality as
instead a modulatory composition. There is a risk here, however, that her conception
of (mathematical) abstraction might see Cunningham’s corporeal movements give way
to yet another kind of disembodiment. Hence, we need to further inquire as to the
actual relations at work between abstraction and the body as they emerge and
reconfigure through artworks such as Loops.

Anna Munster has articulated the digital materialization of the data captured from Cunningham’s hand as an iteration that varies: enacted algorithmically, an oscillating series of temporary formations of the body in digital space capitalise on proximity to one another and on sonic rhythms (Munster 2006:179-80).
Cunningham’s dance is re-composed as a digital embodiment. However, rather than
considering code here as abstract (as Portanova does), Munster sees Loops as
deploying a sophisticated technological art in which code and corporeality are mutably
engaged. Such mutability resists categorisation as either disembodied or wholly re-
embodied in the data:

Information does not simply represent a body or a corporeal experience; it
renders the emergent properties and capacities of bodies as mutable states that
are variable (and delimited) within certain parameters. (Munster 2006:180)

What is needed is an approach to the body that engages modulation as its operating
principle, inflecting the iterative movement of code as it ‘captures’ the mutable
corporeal body, and the ways bodies leave their ‘marks’ on code. Bringing this
understanding of the mutable engagement between the corporeal and the
computational to bear on MR, further loosens the informatic overlay approach
discussed in chapter 1, and starts to stake out a new formulation that accounts for the
shifting materialities of both bodies and data. The virtualization of dance as data, re-
frames not only choreography but also movement itself, motivated as it is by the
shifting terrain of code, algorithms and software. Critically, choreographic movements
captured as data generate material tensions that embroil dance in an intra-active
system that is performative and affective.

The performance of Loops did not involve the continuing physical co-presence of
Cunningham: instead it captured Cunningham’s motion data and algorithmically
structured the dance from that. Since the presence of my performing body will be a
continuing refrain in the software assemblages in this research, I will briefly turn to a
practice-based investigation of choreography and AR that unfolds alongside a co-
present performer. Transmedia performance works incorporating AR techniques with
co-present human performers, such as the Crack-Up (2013),10 deploy motion capture
methods to virtualize the movements of dancers. Those movements are transposed to
‘performing agents’, via custom-designed software by John McCormick. This process
illustrates a way physical movements can be extracted to the digital and then
reincorporated to the physical again through a relationally vested alignment with a
dancer during the performance.

Commenting on Erin Manning’s notion of ‘pre-acceleration’ (Manning 2009), Vincs
notes its value as an approach to movement as virtualized (by data), since it ‘gives the
mass of a dancer’s body relationality, implying directionality since acceleration is
always a vector, but one that is fluid and yet to be determined’ (Vincs 2016:4). While data selectively transfers certain aspects of a (dancer’s) body, leaving others behind, what is always transferred is movement, and within movement is the
‘incipient’ potential for relations that exceed the given. Manning discusses what she
terms the ‘technogenetic’ body and its potential to take ‘the body as pure plastic
rhythm’ (Manning 2009:64). This suggests that movement itself operates aside from
the body as a containing form. As both Manning and Vincs suggest, it is movement’s
mutual and reciprocal relationality with thought, the physical body, and rhythm as an
embodied sensation, that activates the dancing of data. In Always More Than One
(2013), Manning argues that movement exists as a relational force that is
differentiated from the objects through which it moves. She says:
Choreography as event is the fielding of a multiplying ecology in a co-
constitutive environment. It develops in the incipiency of the in-between,
spurred by tendencies that waver between the rekindling of habit and the
tweaking of a contrast that beckons the new (Manning 2013:76).

Explicating the way in which the singular movements of dancers cease to be
individually positioned in the avant-garde choreography of William Forsythe,
Manning observes that such movements exceed the limits of the singular body in
space-time, and instead mark a coming into relation of an individual body with a
moving multitude (76-100). Manning observes this coming to alignment of bodies
brings ‘into complex constellations a rhythm that in-forms the speciations their
movement-moving creates’ (210). Embroiled in this field of movement, a singular
dancer’s body ceases to be a subject that is performing a dance: rather, it is part of a relational choreographic arrangement that de-privileges individual subjectivity and
shifts attention to the emergence of movement itself.

Shifting relations in the software assemblage

The question of how objects might emerge out of a relational event is addressed by
Brian Massumi through his explication of ‘semblance’ (Massumi 2011:18). As well, we
might apprehend this through the emergence of digital objects in a computational
network, such as those generated by my software assemblages. Drawing upon Susanne
Langer and Walter Benjamin’s notions of semblance, Massumi argues that semblances
are not simply static forms, existing on a page or wall devoid of vitality. For example,
in a motif of a leaf, we may perceive its leaf-ness not simply through a set of lines that
suggest the whole form, but in ‘the object’s relation to the flow not of action but of life
itself, its dynamic unfolding, the fact that it is always passing through its own potential’
(Massumi 2011:50). Massumi sees semblance as gifting a ‘vitality affect’ to abstracted
forms, a liveness that is never lost. Semblances are (vital) forms, encapsulating
affective forces in crystallised suspension. When we perceive a semblance, however,
we are not apprehending the empirically real, but a vitality affect that is immanent.
Semblances are virtual because the movement (or force) immanent to them does not
yet actualise. Massumi articulates semblance as the pure abstraction of movement:
suspended in the semblance, the vitality of the movement yet to come is nonetheless
still present. Massumi notes:

the “likeness” of an object to itself ... makes each singular encounter with it teem
with a belonging to others of its kind (the object as “semblance”) (2011:243).

Semblances contain an irrevocable imprint of ‘others of its kind’, connecting to past
instances of other semblances in a mode of lived abstraction. It is the lived relation
with an event that allows semblances to remodulate with the world in ‘full spectrum
perception’ (Massumi 2011:54). In my research, I will be exploring digital augments
as semblances that modulate to a lived relation in an event, motivated back to vitality
by software and signal – such as those emanating from the electromagnetic sensors of
the Leap Motion or the bio-electrical signals of living plants (in chapters 3 and 4).

In the context of the software assemblage, semblances are part of the field of co-
composition where, for example, gestural recursions can be used as a proposition
for re-assembly. Bringing Massumi’s proposition for the processual tension between
semblance and actualisation into the software assemblage suggests a useful technique
for operating with relational shifts as they unfold in events. In the case of my
performances, folding semblance back into the MR space activates gestures without a
finite end, gestures that are incomplete and that move without being oriented to goals.
Such gestures allow relays of data and physicality to fold into one another, as they re-
assemble and relationally inflect. This understanding also resonates with Deleuze and
Guattari’s plane of consistency, where:

the plane of consistency does not pre-exist the movements of
deterritorialization that unravel it, the lines of flight that draw it or cause it to
rise to the surface, the becomings that compose it (1987:270).

Arising through the specific conditions posited by any given assemblage, ‘lines of
flight’ might also be thought of as material relations drawn by movements that are
composed through the flow of matter as agential. Highly potentialized, ‘becomings’
coalesce in unique arrangements that entangle matter – and semblances – in complex
meshworks of desiring production. The emergent relational arrangements proposed
by re-incorporating semblances into a moving flow of data as choreographic elements
will be explored shortly in Tactile Light and Tactile Sound; however, the same
conception resonates throughout the software assemblages in this research. Hand
avatar forms will emerge to intra-actively co-compose in the relational space between
my physical hand holding the Leap Motion interface, a computational system built in
the Unity SDK, and a living projection screen/tactile surface composed of wheatgrass
plants. Critically, these assemblages will explore the idea that, when arranged in a
relational field of movement, the digital hand avatars could be considered as
choreographic elements.

Cunningham’s improvisations with hand and gesture have certainly influenced my own
performative techniques. Yet, of significance for my micro-gestural techniques is a
frequently neglected film by Cunningham’s talented former student, Yvonne Rainer,11
whose Hand Movie (1966) stands out as an early exploration of hand choreography on
film. Rainer was in hospital and unable to dance, although her hands were capable of
movement. The resultant 8-minute film, shot on 8mm black and white film by
cinematographer William Davis at her bedside, is a unique example of the ways in
which constraints upon the body’s normal operation can be powerfully redirected into
new physical formings. Rainer’s temporary disability produced an entirely new
choreographic proposition, one that shifted the expected relation with traditional
concepts of dance, such as the premise that the whole body must be involved
(Lambert-Beatty 2008:178).

Within a fixed frame is a sole compositional element, Rainer’s right hand, situated in
front of a sterile white wall. As the dance begins, fingers twitch away from the palm,
then move to find one another, tentatively at first, then more boldly. Tension is applied
by each element to the next. As the fingers bunch together only to flick apart, we apprehend a
hand exceeding its regular actions, without goal or aim, except to explore its own
operability as a relational set. At times, it appears that the hand could be a group of
dancers, tumbling over one another in curves and lines. In a choreographic turn that
advances a critique of dance conventions, movements are executed without a pre-
determined, overarching structure.

As an exercise to train my hands further in relational micro-gestures, I attempted to
follow along with Rainer, moving as she moved, mimicking her fingers as they leant,
stretched and bent. That exercise became an intense workout for my own hand. While
Rainer’s choreography looked deceptively simple, its simplicity masked the strenuous
practice that went into every gesture. Borrowing from Rainer’s strategies, I developed
a new cadre of performative gestures as a basis for improvisation. For Rainer (and
Cunningham), improvisation is non-goal oriented and operates without recourse to
fixed models or formulae (Rainer 1974:299).12

It is worth pointing out that the choreographic approach gleaned from Rainer
produces an entirely different set of hand gestures than those normally associated with
the Leap Motion, apparent in the software assemblages throughout this research. The
rotations, twists, leans, pinches and stretches of the fingers and palm, performed by
my hand, go far beyond the usual stable of goal-oriented gestures that operate with
interfaces like the Leap Motion. The nuances of such movements have found their way
into Tactile Light and Tactile Sound, but are especially amplified in the Wild Versions
(chapter 3), whose computational modules, built in the Unity SDK, were re-designed to
amplify micro-gesture. Through the software assemblages discussed below, we see
what open-ended possibilities might emerge from an approach that treats the nuances
generated by micro-gestures as a means of inflecting performativity to an interface
such as the Leap Motion.

Matter, intra-action and apparatus: Tactile Light and Tactile Sound

Fig. 16. Tactile Light. Micro-gestures diffract across plants. Image: Simon Howden.

Full video documentation available: https://rewawright.com/2017/05/02/live-performance-tactilelight-dec2016/

Fig. 17. Hand avatars blend with environment. Image: the artist

This re-purposing of commercially available interfaces is activated by the software
assemblage as a relational arrangement in two different versions – Tactile Light and
Tactile Sound. In Tactile Light, I constructed an environment using a combination of
organic and digital materials: a projector beamed my data system onto a screen – an
architectural (lattice) structure made of living wheatgrass (Figures 16-21). It took
approximately six weeks for the environment to grow, before I could attempt
performative interfacing within its architecture. When ready, it became the projection
material for the hand avatars. Encoded as moving image representation within the
onscreen space, my physical hand was visible at scale.

However, the digital augments were not figured as single hands to emerge as exact
replicas of their corporeal counterparts. Instead, the hand avatars emerged on the
wheatgrass screen at scales that were either much larger or much smaller than ‘life’
(Figures 18 and 19). With scale manipulated and the angles of the hand models
skewed, each hand was then displayed across the wheatgrass screen, conjoined in a
visceral architecture with the next. In this way, the data of a single hand moving was
translated into a multitude of augments, with the results in the projected image
sometimes resembling their hand avatars (Figure 19) and at others, pushed to
abstraction by the diffraction of projector light on the wheatgrass itself (Figure 20).

Fig. 18. Environment view. Hand avatars blend with wheatgrass structures. Image: the artist.

Fig. 19. Hand avatar projected to wheatgrass. Image: the artist.

Fig. 20. Hand avatars tend to abstraction. Image: the artist.

Fig. 21. Grass lattice alongside my performative gestures. Image: the artist.

Diffraction – discussed in my Introduction to this thesis13 – is a wave interference
phenomenon that is optically apprehended by the human eye. For Barad, the patterns
of interference diffraction makes as it moves through a medium are marks of the
intra-active process enacted by one type of matter moving through another. In this
way, diffraction differs from other kinds of optical paradigms such as reflection, where
matter and its optical effects are held at a distance; for example, the metaphor of
‘holding up a mirror’ suggests already distant relations between subject and object.
Exploring Barad’s insight regarding the diffractions of matter offers a perspective on
the co-emergence of augments, corporeal bodies and plant multiplicities in my
software assemblage version of MR, where an apparatus is a phenomenon that not
only produces intra-actions, but is an integral part of ‘the ongoing intra-activity of the
world’ (2007:73). As it passed over the living screen, light formed diffraction patterns
between my physical movements and the digitally generated hand avatars, as they
moved through each other across the wheatgrass surface. Because of the texture of the
wheatgrass itself, it was difficult to pick out the individual 3D hand models. Rather,
what was apprehended was a field of moving hands on a surface that was also field-
like. The human hand augments – emerging relationally rather than as individual 3D
hand models – were now only partially recognisable as physical hands. Projecting onto
wheatgrass diminishes digital fidelity and graphic accuracy, and prioritises the
movements of intra-active phenomena. As the light from the projection coalesced
across the wheatgrass screen, forms modulated in transient outlines across its surface.
Phenomena were liminal in the luminous glow of projection, in a scramble of layers
that blurred and muddled recursively. For the performer, relational registrations were
happening all around, in the flow of the event, with the screen only catching a small
portion of the affective vitality of the system. A habitual familiarity with my physical
hand was undercut by the emergence of unusual material-discursive structures. This
highlighted an experience of the body in the act of temporarily re-configuring itself
through recursive relays of semblances. Observed on the grass screen from visually
obtuse perspectives, the hand as a discrete entity – either physical or digital – is
problematized.

In Tactile Sound, I once again spent six weeks growing a wheatgrass sheet. Placing it
on the ground – rather than hanging it as in the previous piece – I added two piezo
sensors at the base of the wheatgrass expanse. In the performance, I sit in the
wheatgrass and activate the piezo discs by mechanical pressure (Figure 22). Capturing
their analogue signals in real time using the software Logic Pro X14, I added delay and
feedback, using hand gestures to work with the sound that emerged. The
mechanical pressure of my hand on the surface of the grass generated the initial
analogue signal; however, as the convolved digital signal reached my ears from Logic,
I needed to craft a response in the moment: the intra-actions between my hand, grass
and data coalesced into a relay that I had little control over. Performing an array of
gestures (such as swishes, shaking, flicking), my hand moved the analogue audio
signal around, as I responded to the processed output and generated new analogue
inputs (Figures 23, 24 and 25).
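In signal-processing terms, the delay-and-feedback treatment is simple to state. The following C# fragment is a minimal sketch of the principle – not Logic Pro X’s implementation, and the parameter values are illustrative assumptions – in which each incoming piezo sample is summed with an attenuated copy of the output from a fixed interval earlier, so that a single gesture re-enters the mix as a decaying train of echoes.

// Illustrative sketch of a feedback delay line: at 44.1 kHz, a delaySamples
// value of 22050 yields half-second echoes; feedback below 1.0 keeps the
// loop decaying rather than running away.
static float[] FeedbackDelay(float[] input, int delaySamples, float feedback)
{
    var output = new float[input.Length];
    for (int i = 0; i < input.Length; i++)
    {
        // Read the echo from the output buffer, delaySamples in the past.
        float echo = i >= delaySamples ? output[i - delaySamples] : 0f;
        output[i] = input[i] + feedback * echo;
    }
    return output;
}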

Fig. 22. Sitting on the wheatgrass sheet, activating piezo sensor and Leap Motion in tandem.
Image: Simon Howden. Full documentation available: https://rewawright.com/2017/05/02/live-performance-tactilesound-jan2017/

Relations between corporeal and digital bodies appeared as emergent phenomena at
various stages of the performance. Diffracted by the data system, visual phenomena
were captured by the LCD display as successive frames, while aural phenomena
produced by the raw signal from the piezo discs were amplified and layered in
feedback loops by specifically choreographed gestures. Generated by the material
arrangements of non-screen based elements, my gestures highlight the relation
between physical, digital and living organic components. These operations of gesture
underscore tactility, where materials are responsive to the choreographic movements
of my body in the flow of performative interfacing.

Figs. 23 and 24. Swishing the grass. Movement left to right. Image: Simon Howden.

Fig. 25. MR screen capture from Unity of hand avatars. Image: the artist.

Drawing on Barad’s analysis, the gestural movements engaged during my software
assemblage performances would be considered as causal intra-actions that affected
all the instruments and other elements present in that particular configuration of the
software assemblage. Causal intra-actions, however, do not control the materialisation
of data through an apparatus – a point that relates to my lack of control over the
analogue signal coming out of the piezo discs described above in regard to Tactile
Sound. Here, it must be pointed out that Barad has completely re-worked the concept
of causality. Hers is not the causality of classical science. Rather, she has opened
causality to a fluid quantum understanding that validates an unpredictable dynamics
for all bodies or materialities (Barad 1996:172). ‘Scientist’, researcher or artistic
performer must all be included as phenomena in part generated by an apparatus
(2007:32), as much as an apparatus itself is, integrally, a phenomenon that is inflected
by these subjectivations (2007:247). Atoms and other sub-atomic phenomena that
constitute the ‘flows’ of matter are not simply objects of study, as in Newtonian physics
(Barad 1996:169), but dynamic agents in the production of knowledge. Applying this
observation to digital augments, we might say that the dynamic intra-actions that
materialise human and nonhuman phenomena in Tactile Light and Tactile Sound
emerge through differentiated flows of matter, as it diffracts through matter.

For example, an instrument such as the Leap Motion – when configured in an art
assemblage – materialises data differently than in a commercial/industrial
configuration. Engaged with unpredictable forces (such as sonic signals, projection
light) and motivated by unusual hand gestures that enhance its tactile potential, the
Leap Motion is re-purposed through performative techniques and through a
diffractive approach. Diffraction here functions in several ways: materially – for
example through the projection light hitting the wheatgrass surface in Tactile Light,
or the sonic signal passed through the wheatgrass, sent to Logic Pro X, and
further modulated with my hands in Tactile Sound15; and metaphorically – as an
antidote to the prevalent assumption (discussed in chapter 1) that digital space is a
reflective domain that should capture and wholly re-compose the physical world.

In the MR software assemblages I create, digital augments are not static objects with
a pre-programmed range of movements. This approach is made possible by using
hardware devices such as the Leap Motion as a dynamic threshold to achieve a more
performative mode of interfacing – not simply as an instrument that allows different
types of data to make technical connections. As I will explicate in the course of this
research, this materialist understanding of performative interfacing operates to lure
the digital augments into different kinds of relational movements with corporeal
bodies and plant multiplicities. The two related assemblages described above
underscore the idea that the software assemblage iterates differently in variable
experimental situations, in a technical design that is speciated: both iterations share
code modules, hand controller elements, and the same species of living plants as
surface or screen. The individual elements of these software assemblages were further
speciated through the inclusion of different types of sensor (such as piezo sensors with
the Leap Motion in Tactile Sound), a re-configured wheatgrass sheet as screen (as in
Tactile Light), as well as a shift from projected light (Tactile Light) to LCD display
(Tactile Sound).

This speciation prevented the design of the artwork from solidifying, since shifting
relations within the arrangement encouraged the performer (myself) to resist the
temptation of forming habitual responses with the nonhuman elements in the
apparatus. As relays of data and the corporeal emerge and compose together, the
relations between the physical and the digital world change. These shifts of the
relation itself make possible the co-composing between the digital hand avatar and its
physical companion in the software assemblage ecology. In both Tactile Light and
Tactile Sound, the performer observes and modulates with the recursive relays, as cues
for an improvisational choreography. Considering the field of movement generated by
the hand avatars as an experimental choreographic arrangement opened these
performances to oscillations across the corporeal and the digital space. My performing
body emerged intra-actively as a phenomenon – on and off screen – in a tangle of
provisional states that oscillated with data and the organic living wheatgrass. Meeting
at different scales, magnitudes and thresholds, these performances established that
data, the body, and living plants could be productively investigated in an alternative
approach to MR that would loosen digital augments from their conventional position
as informatic overlays.

While my research practice had generated a new kind of digital augment, performatively
emerging out of these material diffractions, and had re-purposed the Leap Motion
interface, a problem arose concerning the hand avatar itself. While the data system,
the sonic apparatus, and the gestures produced by the hand were working recursively,
the visible models of the hand, as they appeared onscreen, were severed at the wrist. This
literally gave the augments the sense of being frozen portions of a corporeal body.
Partly, this phenomenon was due to the limits of the Leap Motion interface, which is
designed to only track hands. However, as well, the polygonal models were closed at
the wrist, and this accentuated the effect of the cut. This issue was masked in Tactile
Light via the relationality of the augments, but was viscerally manifest on the LCD
display of Tactile Sound (Figure 26). The issue was not one of simple aesthetics, since
the chopped hand was inhibiting the capacity of the digital avatars to intra-actively re-
configure in emergent co-compositions that might challenge perceptions of isolated
augments and their indexicality. This presented a significant problem for my gestural
approach since it seemed that there was little or no modulation of the semblant forms
of the digital hand avatars with one another, but only of the signal as it was sent
through the embedded piezo sensors, or as it was modulated by my physical gestures.
The severed hand had, literally and figuratively, sliced off any potential for more
complex recursions. It became clear that using a human hand model presented a
limitation. A more fluid motif, open to recursion, was needed. Additionally, the
physical hand in combination with a matching digital avatar, could be construed as a
‘presence metaphor,’ one of the facets of Milgram and Kishino’s taxonomy I had earlier
critiqued.

Fig. 26. On the LCD display, the chopped hand in stark relief. Image: the artist.

In chapter 3, I will discuss the impact of choreographic improvisation in the ‘wild’ and
how this led to the complete removal of these figural hand models in favour of a
fusional hybrid augment that merges human corporeal forms with abstracted representations
of plants found in the natural bush ecology that was the setting for the Wild Versions
(2017). There, we will examine how shifting the digital hand avatars away from
figuration as human generated more complex phenomena in that software
assemblage, and led to a conception of the environment that would modulate in
graphical resonance with the digital hand forms themselves.

This chapter has enacted, via practice-based research, a dynamic materialist
understanding of MR as a software assemblage. This marks a shift away from past and
current taxonomic understandings of MR inherited from engineering,
commercial/industrial MR, as well as in areas of computer science discourse relating
to MR – understandings that are inadequate for augmentation in the field of media
art. The software assemblages presented in this chapter, contributed to a material
investigation of what corporeal-digital-organic formations might become when
envisaged through a MR that develops techniques for different modes of performative
interfacing.

1 Basic information on this plant can be retrieved from https://en.wikipedia.org/wiki/Wheatgrass (accessed 3 June 2015).
2 The Augmented Hand Series was commissioned by the Cinekid Festival Amsterdam, 2014.
3 Retrieved from http://openendedgroup.com (accessed 13 July 2015).
4 Duration 5 minutes, black and white, silent, 8mm film. Camerawork by William Davis.
5 The actual is a highly complex notion which there is no scope to fully investigate here. In A
Thousand Plateaus (1987), it is also aligned with stratification. This marks a subtle shift from
Deleuze's earlier work on the concept. Bonta and Protevi explain: 'the actual is the aspect
complex systems display when, in a steady state, they are locked into a basin of attraction. Actual,
stratified systems hide the intensive nature of the morphogenetic processes that gave rise to them
... . It is as if the actual were the congealing of the intensive and the burying of the virtual'
(2004:49).
6 Following Susanne Witzgall, it should be noted that assemblage, as articulated by Deleuze and

Guattari, and apparatus, from Barad, are quite similar constructions. Barad’s formulation is
underscored by a conception of matter as subatomic (from quantum physics), giving rise to the
notion of intra-action as the processual relation that generates new quantum and physical
phenomena. Deleuze and Guattari were very interested in complexity theory and the molecular,
and this interest has rubbed off on their conception of the machinic assemblage as articulated in
A Thousand Plateaus.
7 Visitor reactions to Levin et. al. Augmented Hand Series from the Cinekid Festival Amsterdam

2014. Retrieved from https://www.youtube.com/watch?v=_PySdQiRN3U (accessed 2 March 2016).
8 Configuration details can be found in the Leap C API, retrieved from

https://developer.leapmotion.com/documentation/v4/vrar.html. (accessed 12 April 2018).


9 Recently, Grand Theft Auto V’s AI environment has proved useful for researchers exploring the

emergent field of self-driving cars. See Martinez et. al (2017).


10 The Crack-Up was performed in Melbourne in 2014. This evening length performance was

created by Kim Vincs, John McCormick, Steph Hutchison, and Alison Bennett, amongst other
collaborators. Project page: http://motionlab.deakin.edu.au/portfolio/the-crack-up/ (accessed 3 July 2018).
11 Rainer, known to practice radical choreography based on everyday life, developed

improvisational techniques that used everyday action rather than classically vested manoeuvers.
Like her teacher Merce Cunningham, Rainer was inspired by chance operations, eschewing the
traditional models of choreography found in classical dance in favour of movements that were
politically charged and experimental. Repetition of processual actions replaces conventional pre-
formatted dances. In Rainer’s improvisation techniques, performers draw on a repertoire of
unfamiliar gestures and movements, rather than repeating a model conceived in advance by the
choreographer (Rainer 1974:87).
12 Rainer describes her choreographic method: ‘Improvisation, in my way of handling it, demands

a constant connection with some thing – object, action, and/or mood – in a situation. The more
connections that are established the easier it is to proceed’ (Rainer 1974:299).
13 In chapters 3 and 4, I explore Haraway’s concept of diffraction in greater detail, as well as

Barad’s elaboration on that approach.


14 Retrieved from https://www.apple.com/au/logic-pro/ (accessed 17 July 2016).
15 Refer to Appendix 1 for details of accompanying video documentation of Tactile Sound to
apprehend this phenomenon.



CHAPTER 3

Emergence, entanglement, and signal

Attending to the relations that oscillate matter with materiality in the software
assemblage has generated artworks that reveal a more performative side to digital
augments than is conventionally manifest in commercial/industrial and computer
science versions of MR. In this chapter, attention will be paid to the potential for
affective relations between nonhuman forces, leaning toward signal modulations
between digital augments, data system, infrared signal, and plants. Embarking on an
analysis that traces entanglements of matter and materials in three post-studio
software assemblages, this chapter will follow my practice of performative interfacing
as it moves from the established indoor situations of Tactile Light and Tactile Sound,
through to less controlled outdoor environments, where wider ecologies will be
utilised as expressive spaces. Working alongside natural ecologies, I extend the ways
in which living plants might become agential forces in the software assemblage: firstly,
in the Wild Versions, a mobile MR kit traces an affective engagement with a natural
ecology; and, secondly, in the two Tactile Signal performances, plants are engaged as
signal producing bodies that I co-compose with.

In the Wild Versions – made in the unpredictable environment of the bush – I explore
the potential of a post-studio approach that nests the software assemblage in a wild
ecology. In the Tactile Signal performances, bio-electrical signals emitted by plants
are incorporated as agential elements that co-compose the sonics of the performance.
In the field of augmented audio, a sonic augment is broadly defined as an artificial
sound added to a more direct source (Cohen, Aoki, Koizumi 1993; Mariette 2013).
Through the Tactile Signal performances, I will be offering an alternative approach
where audio as augmentation is generated through the analogue bio-electrical signals
produced by plants, digitally converted by a computational network. I will be
examining experimental musician Miya Masaoka’s practice of utilising plant bio-
electrical signals in performance via a Body Area Network (BAN), where sonics that
emerge from plants are able to be modulated by the human body. I will be drawing on
Masaoka’s techniques for working with the amplified bio-electrical signals, suggesting
them as a way to extend current approaches to augmented audio in MR.

One of the new configurations explored in my work with the software assemblages at
this stage of the research is a head-mounted permutation of the Leap Motion interface
where we actually look through its infrared camera sensors. In this configuration,
digital augments passed from the Unity SDK are composited in real time on the
distorted, grayscale picture plane generated by the infrared camera’s image stream:
instead of the coloured pixels of the video stream, we now see a heat-mapped
rendering of the physical world. The significance of this interface permutation for my
research proposition is that the infrared camera view is a stark contrast to the colour
video images provided by the webcam stream, used exclusively until now in my
previous software assemblages. It affords a markedly different view of the subjects
framed in the display window and will be used to question the need for a clear
informatic window which, as we saw in chapter 1, is a standard AR/MR visual practice.
Before setting off on an exploration of new techniques for performative interfacing,
the following section will examine the final iteration of the software assemblage in
which the technical elements of the interface are entirely handheld. In the Wild
Versions, augments were released into a natural environment, raising the more
general question of how media and organic elements might work together in a more
affective configuration.

Augments in an expanded field: post-studio practice and MR

While MR experiences are most often set indoors – with devices tethered to
computational networks and other instruments/interfaces – there is also a growing
cadre of techniques for use outdoors, operating in an expanded field of movement.
Already noted in chapter 1 were commercial and artistic examples from mobile AR,
using wireless networks to afford the geo-location of augments. In such examples, a
human experience of an environment unfolds through a cartographic link – formed
via GPS – with the physical environment, approached as a data object that is subjected
to procedures of mapping. In my approach where augments are thought as intra-active
and performative, it did not seem especially useful to follow cartographic and pre-
determined models, since this did not encourage emergent augmented relations.

My approach references late twentieth-century site-specific art more than
computational practice. Emerging in the outdoors, MR in this expanded field could be
approached as a post-studio practice that connects data networks to existing ecological
materialities already present at the site. Developed through the thinking and making
of many artists since the 1960s – but notably in the work of Robert Smithson and John
Baldessari – post-studio practices in fine art de-privilege the object, through the act of
producing work at a location determined by the artist, outside of the gallery system
(Buren and Repensek 1979; Ferguson, Tucker and Baldessari 1990). Working
ephemerally with geological and organic matter, Smithson developed processes of
material re-assembly that intervened at remote sites, away from the reach of an art
audience. As art theorists have argued, seminal works such as Spiral Jetty and
Yucatan Mirror Displacements 1-9 were known largely through their documentation,
not as objects (Kwon 2004; Housefield 2007).1

In the Wild Versions 1-4, I similarly adopted a post-studio conception of practice,
producing a film that documented the performance itself, enacted in the natural
environment of A.H. Reed Memorial Park, a bush reserve in Whangarei, a northern
region of Aotearoa-New Zealand’s North Island (Figure 28). It should be noted that I
do not treat ‘nature’ as a separate sphere but continue to understand ‘bush’ sites in
terms of deep entanglements and intra-actions across all participants in an
assemblage. As I will show, both the Wild Versions and Tactile Signal: Yucca Relay
develop post-studio MR assemblages that manifest from an entangled understanding
of humans and nature.

Emergence and entanglement in the Wild Versions 1-4

Immersed in the ecology of a bush reserve, a digital camera frames a wide shot and
awaits movement. Slowly, a hand comes into frame. It is swiftly followed by a
multitude of hand avatars; augments that are conjoined to the initial (physical) hand
by a software-based tracking system (Figures 29-32). Emergence is between digital
hand avatars and physical hand, as mediated by the ‘signal inertia’ produced by a
webcam stream (a phenomenon that will be explored further shortly). There is always
co-emergence here. Entanglement describes the relational trajectory of all these
different hands, as they emerge through the computational system as well as in the
physical world. Hand positions and gestures were developed with attention to
potential emergence in the wider field, not simply as elements within a display frame.

Fig. 28. A location shot at A.H. Reed Memorial Park, Whangarei. Image: the artist.

Working with a small, self-designed, mobile MR system (Figure 35) brought the
software assemblage to an uncontrolled location with its own agential presence.
Recording of the performances was achieved via the iPhone 8 Plus screen record
function, with the device running the Unity Remote app, connected directly to my
laptop which was playing the Unity scene live. Recording was silent, however in the
studio I overdubbed sound generated from the bio-electrical signal of plants at the site.
The process for generating and capturing bio-electrical signals will be closely
examined later in this chapter, so will not be explicated in this section.

The Wild Versions live recordings exposed my processes of performative interfacing to
the impact of an immersive organic environment. Here, we might revisit DeLanda’s
articulation of spatial expressivity, raised in chapter 1. DeLanda perceives nonhuman
spaces – such as natural environments – as embedded with their own affective
potential, revealed to the (human) animals who must negotiate its parameters:

Spatial expressivity has another aspect: the relational one. The ecological space
inhabited by an animal expresses, through the arrangement of surface layouts,
the capacities it has to affect, and be affected by, the animal (2007:103).

Fig. 29. Still image from Wild Version 1. Image: the artist.

He traces spatial expressivity for its influence on behaviour, as a relational field that
can be considered an affective force of the nonhuman. Considering this with respect
to the Wild Versions affords an understanding that the outdoor environment
contributes to co-shaping the performance by broadly influencing my movements and
gestures. The affective presence of the space contributes to the emergence of augments
in the performative event. From a human centred perspective, agency is not generally
afforded to space itself; although it can be acknowledged that animals have their own
agency, agency seems a strong designation for an ‘arrangement of surface layouts’.
However, returning to Barad’s agential realist framework, where nonhuman matter is
bubbling with its own agential processes, we know that matter (such as that found in
the ‘wild’) is shifting at quantum scales in every moment, and this is the concept I was
working with throughout the Wild Versions. Performing the Wild Versions in the bush
environment, I became immersed in that location through a more general cross-modal
sensory engagement. Working with the software assemblage at the bush reserve
required that I remained open to the shifting relations of the site – light, weather, and
other organic elements. For example, lens flare caused constant changes in the video
stream signal as seen in numerous screen images (such as Figure 34). As the
environment became an expressive space, the hand improvisations were
choreographed in relation to trees, rocks, a stream, decaying tree trunks and other
elements as these surrounded the performance in this enfolding environment.

Fig. 30. Screen image from Wild Version 2. Image: the artist.

Acknowledging the expressivity of non-human spaces also demands we see spaces
such as the bush environment as matter that generates intra-active phenomena. In
Barad’s agential realist account of intra-action, in common with materialist theory,
humans are a ‘phenomena’ produced through enactive processes, and therefore are
not privileged in relation to other agential configurations – such as those that compose
the materiality of non-human spaces (2007:33). For Barad, non-human phenomena
are the concrete results of the causal intra-actions of matter (Barad 2007:175),
invested with specific situated and embodied forms of agency. Since humans, in
Barad’s agential realism, are merely one potential configuration of matter, we can
begin to perceive the ways in which non-human forces might influence a human
experience of ‘reality.’ Performatively intra-acting with augments in an extensive
outdoor ecology resulted in the need to invent new tactics to articulate the shifting
relations with nonhuman matter.

Fig. 31 and 32. Sequential screen images from Wild Version 2. Image: the artist.

The process of working with the site as an expressive space began before the recording
of these performances, and included the photographic studies made of trees that
would be later used to create digital hand avatars. I began to see how the hand avatars
could visually enfold aspects of the natural ecology, so I crafted graphics based on
plants found in the local area, using the Speedtree Modeller.2 Despite being based
graphically on plant forms, the digital hand avatars are only quasi-figural: as surface
textures in the act of relation, these graphical surfaces tend to abstraction. My concept
was not to simply re-produce a modelled replica of vegetation as a hand. Rather, it was
to allow the hand avatars to resonate as an expressive element that could be read as a
semblance of a physical hand, as well as of the environment (such as the tree fern
avatar with actual tree fern in background, Figures 33 and 34). Material and
choreographic relations emphasise co-composition between data system and physical
hand, but also make apparent the living plants in this environment by way of the
graphical hand models.

Technically, the plant forms applied to the digital hand avatars are two-dimensional
image textures; a graphics layer attached to a physics model.3 As such they are
completely static until activated by the physics model: it is the connection these
surfaces make through code programmed in C# that generates the conditions for their
emergence and entanglement. The hand models are connected in a specific way,
through a technique that I have developed across this practice. In this technique,
multiple Leap Motion hand controllers are placed with overlapping three-dimensional
coordinates, in one Unity scene. This produces a temporally shifting digital topology
where localized points coincide with one another, then visibly entangle. Emergence, in
the digital hand avatars that sit across these three-dimensional coordinates, generates
the potential for relational movements with the augments as choreographic data
objects. In motion, the hand avatars emerge in tandem yet at obtuse angles and
framings. Attention was also given to forming temporary conjunctions between the
fingers and palm. The entry point, angle, presentation and position of the physical
hand, sets many of the conditions for the emergence of the augments. Pressing against
and through one another, they spill across the surface of the webcam stream, drifting
in mid-air and seemingly unrestricted by framing conventions.
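
In outline, this multiplication technique can be sketched in Unity C# as follows. This is an illustrative reconstruction rather than the project’s actual code: handRigPrefab stands in for whatever prefab wraps a Leap Motion hand controller and its graphics model, and the jitter, skew and scale values are placeholders for the overlapping coordinates described above.

using UnityEngine;

// Illustrative sketch: instantiate several Leap-driven hand rigs into one
// scene with overlapping, skewed coordinate spaces, so that their avatars
// visibly entangle when driven by a single physical hand.
public class HandMultiplier : MonoBehaviour
{
    public GameObject handRigPrefab; // assumed: prefab wrapping a Leap hand controller + model
    public int copies = 5;
    public float positionJitter = 0.3f; // metres of coordinate overlap
    public float scaleSpread = 2f;      // avatars much larger or smaller than 'life'

    void Start()
    {
        for (int i = 0; i < copies; i++)
        {
            // Offset and skew each rig so its localized points coincide
            // with, then diverge from, the others.
            Vector3 offset = Random.insideUnitSphere * positionJitter;
            Quaternion skew = Quaternion.Euler(0f, Random.Range(-60f, 60f), Random.Range(-30f, 30f));
            GameObject rig = Instantiate(handRigPrefab, transform.position + offset, skew, transform);
            rig.transform.localScale = Vector3.one * Mathf.Pow(scaleSpread, Random.Range(-1f, 1f));
        }
    }
}

Because every rig reads the same tracking stream but renders it through its own displaced coordinate space, one physical gesture emerges simultaneously as a multitude of entangled avatars.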

Fig. 33. Screen capture from the Wild Versions 4. Image: the artist.

Fig. 34. Screen capture from the Wild Versions 4, showing light diffractions.

Image: the artist.

During live performance, an attentiveness to the micro-gestures of my hand becomes
part of the negotiation with the digital and multiplied hand avatars. My strategies for
performative interfacing with the entangled digital hand avatars are improvisational,
as I respond to the emergent hands as they pass out of Unity. Then, my response
circulates back to Unity by way of the webcam stream, in a recursive relay of data. These
recursions generate a fluctuating field of movement that my body-brain (of the
performer) has difficulty separating, and instead must flow with, in the moment of
performative interfacing. Even the human hand that initiated the tracking process has
trouble discerning which set of relational coordinates to feel its way toward next.
Physical moves are tentative gestures, as I attempt to apprehend the position,
orientation and scale of the micro-temporally generated hands, and craft a durational
response.

‘Signal inertia’ in the Wild Versions

My hand is responsible for the initial gesture that is tracked by the Leap Motion SDK.
However, perceptually and then performatively, the opposite is true for my outdoor
MR system, which had inserted the phenomenon of webcam delay into this software
assemblage. The parsing back and forth of data at less than optimal bus speeds
produced the conditions for ‘signal inertia’, caused by a delay in the processing of the
webcam stream. This meant that my real hand appeared on screen, via the webcam
signal, about half a second after the digital hand avatar, since the micro-temporal
tracking provided by the Leap Motion SDK effectively ‘beat’ the slower webcam image
to Unity. In the recorded performances, a visible gap emerges between my physical
hand and the digital hand avatars.4 My response to this temporal mismatch was not to
attempt to return with a more powerfully designed mobile system. Instead, I saw the
opportunity to approach this delay as an affective force exerted by technical elements
that could be explored for its expressive potential. To facilitate unexpected intra-
actions between the corporeal and digital in the Wild Versions films as iterative events,
I worked with the materialities of signal. Suspended in the updating flow of webcam
data, the conjunction of digital hand avatars, physical hand on the screen, and hand in
physical space, not only produced a more relational assemblage than in Tactile Light
and Tactile Sound, but also investigated the affective potential of signal itself.

My improvisations worked with the signal inertia as a nonhuman yet expressive force,
where the delayed image stream came to mediate the digital hand avatars as
choreographic data objects. Relationally moving to screen-based visual cues, such as
position, gesture, and movement, my single physical hand improvised along with the
digital multitude. Real time tracking was disrupted with digital avatars and images of
physical hands occupying the same screen space, yet persistently temporally separate.
This gave the visible impression that the digital avatars had an agential presence. In
this way, the contingent emergence of signal inertia became a compositional element.
The play between relational movements as they emerge between the corporeal body,
the data system, and the natural ecology provided an opportunity to let the
augments operate outside a synchronised tracking system.
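
The temporal gap could, in principle, even be reproduced deliberately. The following Unity C# sketch is an assumption-laden illustration, not my performance code: webcam frames are held in a ring buffer and displayed late, while the Leap-tracked avatars continue to update immediately, yielding the same lag between the physical hand on screen and the digital multitude.

using UnityEngine;

// Illustrative sketch: webcam frames are buffered and shown ~0.5 s late,
// so the 'real' hand on screen trails the micro-temporally tracked avatars.
public class DelayedWebcamFeed : MonoBehaviour
{
    public Renderer backgroundSurface; // plane displaying the webcam stream
    public int delayFrames = 30;       // roughly half a second at 60 fps

    WebCamTexture webcam;
    Texture2D[] ring;
    int head;

    void Start()
    {
        webcam = new WebCamTexture();
        webcam.Play();
        ring = new Texture2D[delayFrames];
    }

    void Update()
    {
        if (!webcam.didUpdateThisFrame) return;
        // Store the newest frame in the ring buffer...
        if (ring[head] == null)
            ring[head] = new Texture2D(webcam.width, webcam.height);
        ring[head].SetPixels32(webcam.GetPixels32());
        ring[head].Apply();
        // ...and display the oldest one, delayFrames behind.
        int oldest = (head + 1) % delayFrames;
        if (ring[oldest] != null)
            backgroundSurface.material.mainTexture = ring[oldest];
        head = oldest;
    }
}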

Fig. 35. Sketch of my homemade mobile system: webcam, laptop, iPhone 8 plus (the 'frame')
running Unity Remote app. Image: artist’s workbook.

Exploring the affective capacities of signal – as opposed to treating signal as a
‘seamless’ technical mechanism – further problematizes the screen-based definition
of MR. A return to conventional positions on MR will highlight what is at stake, and
why a re-framing of the neglected issue of signal is pertinent to my research. Paul
Milgram summarizes the conventional position on so-called ‘raw video’ images (or
video signals) in MR, as:

Mixed Reality (MR) refers to the general case of combining images along a
continuum which ranges from purely real (unmodelled) data, such as raw video
images, to completely virtual images, based on modelled environments.
(Milgram 2006:1)

Such a definition occludes the significant role of signal in mediating an image stream,
instead categorising signal as an unproblematized transmission of the real. While
Milgram and Kishino (1994:1327) do distinguish between direct and non-direct
viewing of the ‘real’, the underlying assumption is still that it is desirable to re-create
as realistic an image stream as possible in ‘virtual’ space.5 For Milgram, the display
screen is simply a container that frames a synthesised ‘reality’. From the perspective
of signal inertia, it seems reductive to define video images as ‘purely real’ since as we
have seen, the data stream itself has a materiality that can be ‘modelled’ by qualitative
aspects like signal strengths, lighting conditions, network weaknesses and electrical
‘flow’. Furthermore, a video image stream is a translation of light as a sequence of
interlaced red, blue, and green pixels: it is a misrepresentation to present it as raw
unmediated reality.

To investigate the affective potential of signal in relation to augments and corporeality
in greater detail, I decided to explore the Leap Motion’s conjunction with the head-
mounted HTC Vive desktop VR device. This made the Leap Motion accessible as a
look-through camera, whose image stream afforded an experience of the
electromagnetic impulses of the infrared signal itself. Signal is far from ‘unmodelled’,
and in fact is diffracted by the electromagnetic radiation it attempts to map. Rather
than being supported by the engineered clarity of the ‘raw’ video image stream, I
speculated that digital augments would emerge enfolded with the infrared signal itself.

From hand held to head mounted

In early 2017, the Leap Motion company introduced a custom head mount whereby
the gestural interface could attach to the front of a Virtual Reality Head Mounted
Display (VR HMD)6 to be utilised as a camera one could look through. The use of a VR
HMD for non-immersive display conveys a mix of camera stream – a framing which
also includes hand gestures – and signal (Figure 36). While in both handheld and head
mounted uses of the Leap Motion, infrared data is sent to the Unity SDK, in handheld
use the signal that sends the data is masked out. In previous software assemblages,
digital augments were transposed as elements within a clean looking webcam stream,
not on the actual infrared stream to which they were tracked. Since the infrared signal
was invisible, the augments rested against the webcam image stream, in a screen space
that still resembled physical space. In this permutation, that has materially changed.

Fig. 36. Screen capture from Leap Motion/Vive display. Image: the artist.

However, when incorporated into a VR HMD, the Leap Motion’s camera sensors can be
accessed by the Leap Motion API and passed through the HMD’s viewport.7 Using the
image pass-through feature renders the image stream from the infrared sensor as a
visible background plane. Operating at a threshold normally imperceptible to human
vision, infrared radiation requires an interface to materialise its electromagnetic field
as visible light (Figure 37). Now, digital augments emerge on this ground/field of
electromagnetic radiation. Through the Tactile Signal performances explicated soon,
I will suggest that this signal activates electromagnetic radiation as a disruptive
threshold through which to diffract the visual plane.
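
At the level of code, the pass-through can be sketched as follows. This is a hedged illustration: the exact acquisition call differs across Leap SDK versions, so GetLatestInfraredFrame is a hypothetical placeholder for reading the raw grayscale buffer from the controller, while the Unity texture upload itself is standard.

using UnityEngine;

// Illustrative sketch: upload the Leap Motion's raw 8-bit infrared frame to
// a texture on a quad rendered behind the scene, so digital augments
// composite over the electromagnetic image rather than a colour video feed.
public class InfraredBackground : MonoBehaviour
{
    public Renderer backgroundQuad;
    public int width = 640, height = 240; // assumed sensor resolution

    Texture2D irTexture;

    void Start()
    {
        // Single-channel texture: infrared intensity only, no colour.
        irTexture = new Texture2D(width, height, TextureFormat.R8, false);
        backgroundQuad.material.mainTexture = irTexture;
    }

    void Update()
    {
        byte[] raw = GetLatestInfraredFrame(); // hypothetical wrapper around the SDK's image API
        if (raw == null || raw.Length != width * height) return;
        irTexture.LoadRawTextureData(raw);
        irTexture.Apply();
    }

    byte[] GetLatestInfraredFrame()
    {
        // Placeholder: in practice this would read the grayscale byte buffer
        // that the Leap service exposes for one of its two IR cameras.
        return null;
    }
}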

Fig. 37. Diagram of the Electromagnetic Spectrum. Credit: NASA’s Imagine the Universe.

Important for the Tactile Signal performances is that the point-of-view afforded by
the HMD cannot be considered as a ‘window’ connecting some artificially purified
notion of ‘raw’ unmodelled data (Milgram 2006:1). Looking through the Leap Motion
at the infrared signal disrupts a clear correspondence between real and virtual worlds,
interrogating the necessity of this as a design feature of MR. The actual pixelated plane
of the infrared signal is replete with diffraction patterns caused by the passage of
electromagnetic radiation through the Leap Motion device. As captured by the data
system, the infrared signal is a digital record of the diffraction of heat as it travels in
electromagnetic waves around objects in the physical world. When the infrared signal
arrives at the surface of the HMD display, it is akin to a visual archive of the
enfoldment of heat as digital matter, which is then further diffracted as it meets the
digital augments.

Furthermore, the passage of the infrared signal, where heat is converted to image
pixels, beckons an extension of the materialist analysis applied to the software
assemblage so far, to include the disruptive interference patterns produced by
‘signaletic’ media. Bodil Stavning Thomsen (2012) describes the qualitative elements of
signaletic media as ‘the ability to create affective encounters within the folded
operation of the signal’ (2012:8). Such a capacity to generate new encounters between
bodies and data at the level of signal now becomes an aspect of the software
assemblage formulation, since it affords an understanding of the entire augmented
space including its field as performative materials. The infrared signal renders its
image stream in a visceral materiality that is not often associated with MR: such a
signal needs to be analysed as a nonhuman agential force that can become affective
within the software assemblage.

This materiality of signal affords a perspective on the processual transmission of data
across networked arrangements. Earlier in this chapter, I showed that the relationality
generated by signal inertia could be co-composed with affectively, generating a
more performative type of digital augment. In relation to my two Tactile Signal
performances, it will be argued that looking through the Leap Motion interface as a
camera generates a digital augment that intra-actively emerges with the infrared
signal at an electromagnetic threshold. Moving with this threshold, I will be
interrogating the shifts that signal produces as it emerges and entangles with my
performing body.

Touching the signal: the Tactile Signal performances

Through the Tactile Signal performances, I will explore the infrared signal as
generating a highly problematized zone of complex intra-active movements and omni-
directional material relations. As a vector through which we might interpret Barad’s
conception of apparatus as it impacts on augmented materials, I will be using the head-
mounted permutation of the Leap Motion to co-compose a differentiated state of
embodiment for the performer, who becomes the ‘camera’. I will also be offering an
alternate view of sonics in MR, by utilising the bio-electrical signal coming from living
plants as a form of augmented audio. In these performances, I use my hand gestures
to diffract two types of signal through different designs of the assemblage. Firstly, in
the Yucca Relay performance, I use my hand gestures to generate digital augments in
response to the bio-electrical signal from a living yucca tree.

Secondly, in Agave Relay, I intra-actively modulate the bio-electrical signal from a
living agave plant, using specifically developed hand gestures to shape that signal.
While both performances are materially differentiated from one another – deploying
technical devices, software, plants, and my body in iterative arrangements (see Figures
38 and 39 for signal path diagrams) – they also have common elements of design. For
example, in both software assemblages, plants are fitted with electrodes attached to a
MIDI sprout sensor8 through which their bio-electrical signal passes. Measured as
voltage by the sensor, the bio-electrical signal is transposed in real time as a MIDI
sequence, arriving at the sound design programme Logic Pro X. Translated as musical
notes, this signal is interpreted by a virtual sound bank of noise, designed by myself.9
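
The mapping at the heart of this chain can be sketched in a few lines of C#. The voltage range, note span and scale below are assumptions for illustration – the sensor’s own firmware mapping differs – but the principle is the same: a slowly fluctuating voltage is quantised into pitches that Logic Pro X then voices through the sound bank.

using System;

// Illustrative sketch: quantise a plant's bio-electrical voltage reading
// into a MIDI note number, snapped to a pentatonic scale so successive
// impulses land on musically related pitches (roughly notes 36-93).
static class PlantMidi
{
    static readonly int[] Pentatonic = { 0, 2, 4, 7, 9 };

    public static int VoltageToMidiNote(double volts, double vMin = 0.0, double vMax = 5.0)
    {
        // Normalise the reading into 0..1, then spread it across scale steps.
        double t = Math.Max(0.0, Math.Min(1.0, (volts - vMin) / (vMax - vMin)));
        int step = (int)(t * 24); // ~24 scale steps, just under five octaves
        return 36 + (step / Pentatonic.Length) * 12 + Pentatonic[step % Pentatonic.Length];
    }
}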

Bringing Barad’s understanding of matter as intra-active to bear on augmented
material has led more generally to an approach where augments are considered as
intra-active forces that coalesce with other modalities of matter, such as the corporeal
body, infrared and the bio-electrical signals from living plants. This perspective also
engages diffraction as a process which generates patterns of interference, through the
relational forces exerted on one mode of matter – digital, flesh or plant – by another.

The world visible through the HMD implicates the performer in a mix of realities
where the ‘real’ is no longer rendered via a video image stream as an analogue of
human vision. Attaching a VR HMD precipitates a profound perceptual shift in my
own sense of embodiment while intra-acting with the digital augments. To convey a
visual image to the wearer, the HTC Vive uses stereoscopic vision delivered through a
screen display with a refresh rate of 90Hz. However, the Vive is not only a stereoscopic
apparatus: it also uses a laser-based positional tracking technique, whose sensors
transpose body movements to digital space.10 In contrast, I use the VR HMD in a non-
immersive system, with access to a mixed digital-physical world view through the Leap
Motion’s camera. Yet this camera view is distorted and grayscale, replete with
disruptive pixels that are certainly not imaging the world at 90Hz. Here, material
collisions – such as between the disruptive static of the infrared signal, the crisp digital
augments, and the human hand materialised as signal (Figures 40 and 41) – can be
understood as different relational thresholds of matter enfolded in the temporality of
the infrared signal as it passes through the Leap Motion/Vive apparatus.

Repositioning the Leap Motion from my hand to my face also brings into sharp relief
the difference between my experience of the performance and that of an audience.
While this split is always present, since there is continually a performer emerging via
the performance itself as well as whatever subject positions are affectively present for
an audience, the HMD amplifies this relation. For example, in the Agave Relay
performance – discussed in detail later in this chapter – I frame the HMD camera view
as the digital augments materialize and are modulated through my hand gestures with
the plant’s bio-electrical signal. In this process, I am entirely implicated as part of this
apparatus. At the same time, the HMD view is sent to a large external screen display,
perceived as a video rendering by the audience. Thus, at least two parallel perceptual
viewpoints are at play - one experienced as embodied by the performer as they
negotiate the tasks the apparatus requires, and one perceived by the audience as a
screen rendering of the embodied process. In such networks, MR is more than what
happens on the screen. Rather, the ‘mixing’ that happens in between relational forces
generates an emergent MR, across the interferences of the nonhuman organic, the
signaletic and the corporeal. The affective movements of the corporeal, organic, micro-
temporal, and so forth, have the capacity to precipitate new senses of embodiment for
the performer.

Figs. 38 and 39. Signal path diagrams for Yucca Relay and Agave Relay.

Images: artist’s workbook

Toward a generative sonics of augmentation

An important processual shift outlined in this chapter is the transition from treating
plants as material (such as in chapter 2) to involving the bio-electrical potential of
plants as co-compositional in an assemblage. For example, in Tactile Sound, sonics
were generated by mechanical pressure I placed on piezo sensors using gestures. While
the piezo sensors were embedded in the wheatgrass, this technique did not record any
actual bio-electrical signals: it simply captured mechanical pressure as sound. By
contrast, the two Tactile Signal performances now under discussion activate plants as
signal producers. The significance of an approach that considers plants as more than
materials lies with how a plant’s bio-electrical signal might modulate the digital
augments. It will be argued that this modulation challenges the informatic overlay
approach as it has been applied to augmented audio.

In conventional approaches to AR, sound as it underscores visual augments should
‘create a convincing sensory impression’, such as in virtual or augmented historical
environments (Weinzierl and Lepa 2017:68). In such approaches, audio is an
accompaniment to digital augments, reinforcing the primacy of visual perception.
Additionally, the field of Augmented Reality Audio offers a set of definitions so vague
as to be of limited use value to either engineering or media art practice. For example,
in the article “Human factors research in audio augmented reality”, Nicholas Mariette
(2013) applies Azuma’s three characteristics for visual AR to the nascent field of
augmented audio. This standardises AR audio to broadly being the addition of
artificial sound to ‘real world’ sound, and from this basis Mariette attempts to
distinguish a ‘taxonomy’ for AR audio (2013:14). This approach has obvious
conceptual limitations, since the technical operation of overlaying artificial sound on
real sound, already encompasses a large range of existing practices, including broad
general cases of digitised sound played in a room environment emanating from
everyday equipment such as speakers.

While the technique of binaural recording – a cornerstone of augmented audio
practice (Mueller and Karau 2002; Harma et al. 2003) – has been used in an
expressive and unique way to add nuanced sound to mobile AR artworks – for example
in Cardiff and Miller’s the City of Forking Paths (Sydney, 2014-2017), discussed in
chapter 1 – this technique is more suited to devices such as smartphones that have
headphone-style outputs. In my performative situations, sonics are amplified in
architectural environments. To experiment with sonic augmentation that is not only
amplified, but responsive to the real time intra-actions of performative interfacing, I
needed to explore avenues for audio modulation.

To offer a productive alternative, I examined the concept of the Body Area Network
(BAN). In engineering, BAN research investigates the human body as a transmission
channel for data, where the bio-electrical charges that circulate in fields around the
surface of the skin are harnessed as energy sources able to power wearable devices
(Zimmerman 1995; Fujii and Okumura 2012). In the performance practice of laser
koto musician and BAN interface designer Miya Masaoka, we find an exciting
experimental trajectory that modulates plants, data and the human body. Since the
1990s, Masaoka has worked intensively with plants as co-composers, creating
ensemble musical pieces for live performance. Her initial plant-human interfaces – a
type of BAN established to sense physiological electrical data near the body – were
developed with BAN pioneer Tom Zimmerman. These incorporate body-plant-energy
networks to generate live musical performances as well as scored compositions.

In performances such as Pieces for Plants (2000-2012)11, Masaoka networked a
specially prepared philodendron – fitted with electrodes attached to its leaves – to
react in concert with electrical fields around her body. Using the BAN as an unstable
formation, Masaoka modulated with the emergent energy fields. Masaoka has shared
her perspective on plants as responsive entities:

Working with plants in my studio, I was astonished by their ability to respond
consistently to my walking in and out of the room ... I [became] increasingly
aware of the sensitivity of the plant, and gained greater empathy and awareness
of their behaviour, needs and responses (Miya Masaoka 2018).12

Clearly, Masaoka considers the responses and behaviour from plants as an element
that affectively co-composes the sound piece, citing their ability to emit different
signals according to their environment. Masaoka’s creative use of BANs can be
thought of as a way to pose provocations to the practice of augmented audio. I have
rethought some of her artworks in the context of performing augmented audio by
introducing issues of signal flow and modulation to the Tactile Signal performances
and to Contact Zone (in the next chapter). This sonic flow is not under the full control
of the performer, yet it can be adeptly modulated using both carefully placed tactile
gestures and subtle body movements. The bio-electrical signal of the plants could be
considered as a kind of micro-impulse, a bubbling ground of voltage that conveys a
mode of nonhuman affect. Through attention to the micro-impulses in a modulating
bio-electrical signal, aural perception shifts focus from the musical notes played as
expressive of the intentionality of the performer, toward the sonic forces manifested
by the impulses emitted by the plant. Mobilising Masaoka’s approach assists in the
development of a generative sonics of augmentation, arrived at through the
articulation of real time signal flow: one that converges the gestural movements of my
physical hands with digital augments as they pass through the software assemblages,
and are further diffracted through processes of performative interfacing.

Figs. 40 and 41. Screen images via HMD in Yucca Relay performance. Image: artist’s workbook

The body-tree-data circuit in my backyard

A large yucca tree resides in a backyard in Enmore, Sydney. For the purposes of this
performance, it has been fitted with electrodes attached to a MIDI Sprout sensor
whose data will be sonified using Logic Pro X. I approach, wearing the Leap
Motion/Vive HMD: through a point of view shot, the tree is framed. In advance of the
performance, a set of choreographic parameters were determined, establishing a loose
organisational structure for how this software assemblage might unfold. I would wait
for a bio-electrical signal emitted by the yucca. Then, I would improvise a gestural
response that activates digital augments via the Leap Motion/Vive combination
(Figures 40 and 41). This response would be enacted while holding an ultrasonic
microphone, whose transducer captures the normally inaudible sound of my hand as
it moves through the air in front of the tree (Figures 42 and 43). The process then
repeats until the end of the performance, itself durational depending on the level of
bio-electrical activity from the tree and my threshold of attention with the system.

Since the tree was operating on a phenological time scale, waiting between five and ten
seconds for a sonic emission was normal – sometimes up to fifteen – a process over
which I had no control.13 Once the yucca emitted a signal, intra-actions between body,
tree, and data were open ended and durational. I tune carefully to the sonified
bio-electrical signal so as to sculpt a gestural response. Holding the ultrasonic
microphone in my right hand (Figure 42) does not obscure the tracking system of the
Leap Motion/Vive apparatus, which mostly registers my hand's outline. This means I
can agitate the ultrasonics that arise from my hand gestures, in response to the yucca's
signal.
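
To give a concrete sense of the relay’s timing, the waiting process can be expressed as a minimal monitoring sketch in Python. This is not the performance patch itself – which routed the MIDI Sprout into Logic Pro X – but an illustrative stand-in, and the port name is hypothetical:

    # Minimal sketch: log the gaps between the yucca's MIDI emissions.
    # Assumes the MIDI Sprout appears as a standard MIDI input port.
    import time
    import mido

    port = mido.open_input('MIDI Sprout')   # hypothetical port name
    last_onset = None
    for msg in port:
        if msg.type == 'note_on' and msg.velocity > 0:
            now = time.monotonic()
            if last_onset is not None:
                # In Yucca Relay, gaps of five to fifteen seconds were typical.
                print(f'{now - last_onset:.1f} s since the last emission')
            last_onset = now
            # The gestural response itself was improvised, never scripted.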

In previous software assemblages in this research, the graphic models applied to the
hand avatars were added to a structural 'mesh' in the code module.14 The mesh was
based on a human hand form, and the graphic model was an attached component.
Experimenting with shifting possibilities in Yucca Relay, I chose to remove the mesh,
instead attaching the graphic models to a line renderer.15 My newly designed meshless
avatars do not track the outline of the physical hand, but instead are attached to the
co-ordinates of my palm and wrist. Now, digital augments react more fluidly, rather
than being tied to a hand avatar as a condition for emergence (Figures 44 and 45).
Since this new technique also prevented my physical hand from having a figural digital
companion to track during performative interfacing, it precipitated a shift in my
approach to that process as well. My hand gestures became more mobile, since I
needed to worry less about breaking the tracking due to excessive movement. My
embodied perception during performative interfacing was further challenged, since
the meshless avatars were less controllable than in the previous module. Although I
had only one line renderer attached to each hand, the Leap Motion was sending many
more coordinates to the Unity SDK than the line renderer could process. This resulted
in lines emerging in quite random patterns of interference across the Unity scene itself
(and, of course, in my HMD). The meshless approach and its challenges will be further
explored in the next chapter, during my solo performance in Contact Zone.
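
The overflow behaviour is easier to grasp in a sketch than in prose. The module itself ran in Unity, but the logic can be simulated in a few lines of Python (the capacity and names here are hypothetical, not taken from the actual code module): when the tracker supplies more coordinates per frame than the renderer can hold, an uneven subset survives, and the drawn line leaps between non-adjacent samples.

    import random

    RENDERER_CAPACITY = 8   # hypothetical: points the line can hold per frame

    def tracker_frame(n_samples):
        """Stand-in for one frame of Leap Motion palm/wrist coordinates."""
        return [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0, 1))
                for _ in range(n_samples)]

    def render(samples):
        """Drop samples unevenly when the stream exceeds capacity, so the
        rendered line jumps between non-adjacent points."""
        if len(samples) <= RENDERER_CAPACITY:
            return samples
        kept = sorted(random.sample(range(len(samples)), RENDERER_CAPACITY))
        return [samples[i] for i in kept]

    print(render(tracker_frame(32)))   # 32 samples squeezed into an 8-point line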
Fig. 42. Yucca Relay's performative interfacing. Fig. 43. Tree with white electrodes.

Flux, threshold and scale were critical operations in this performative interfacing. For
example, as I reacted to the yucca’s signal, gestures from my hands sketched graphic
flows in digital space that, in the physical world, troubled electromagnetic frequencies
picked up by the ultra-sonic microphone. Or, as my head moved with the weight of the
HMD, the framing shifted toward a new orientation and tracking with the Leap
Motion SDK broke, sending augments across the screen and out of view. Or, the
ultra-sonic microphone, capturing interference from the MIDI sequence generated by
the tree’s bio-electrical signal, relayed those wave forms as an auralization of normally
inaudible frequencies. Digital augments were bright and green, enfolded to the dark
materiality of the infrared signal. A multitude of signals – infrared, bio-electrical, and
digital – circulate through this software assemblage, in contesting relays that are
aural, visual and embodied. In this situation, I negotiated a response to the
bio-electrical signal as it emerged, with the added perceptual challenge of remaining
attentive toward the shifting frame of the point of view shot from the HMD.
Fig. 44. Tactile Signal: Yucca Relay performance, meshless avatars. Image: the artist.

Fig. 45. Tactile Signal: Yucca Relay performance, meshless avatars. Image: the artist.
While Yucca Relay produced a situation where a living tree became a co-composer,
the piece was still limited by its ‘call and response’ format, where essentially two
parallel systems emerged alongside one another. To further explore
opportunities for performative interfacing with plants and augments, I needed to
design a system that was interwoven rather than parallel, and operated with the
recursive relays generated by the data and signaletic materialities in a modulating
meshwork: a system that would allow for a greater circulation of patterns of
interference between digital augments, sonic signals and my performing body. The
result of that endeavour is detailed in the next section.

Composing augments with an agave

An agave plant sits in a gallery space, emitting bio-electrical signals while waiting for
the arrival of a performer. I arrive, wearing the Leap Motion/Vive apparatus. Sitting
on a chair facing the plant, my tactile hand gestures are used to shift the frequency of
the bio-electrical signal (Figure 46). Agave Relay came about through a desire to
interweave digital augments, hand gestures and the bio-electrical signals from living
plants. I used tactile gestures, touching the leaves of the agave plant to modulate with
its bio-electrical signal, which was networked to the MIDI Sprout capacitive touch
sensor and the Logic Pro X sound design programme. As I performatively interfaced
with this plant-body circuit, augments emerged in the display of the HMD, attached
to my hand gestures as I manipulated the agave’s leaves. These gestures – described
in the section below – modulate the bio-electrical signal as it is generated, and also
cause the digital augments to emerge. Investigating the material process of modulating
a bio-electrical signal as a mode of audio augmentation in MR, I generated a method
that posited an alternative formulation to the computer science/engineering/commercial
practice of layering a ‘realistic’ sound on top of a visual augment to give it a more
convincing virtual presence. Imbricating both data and signal, this software
assemblage requires that I co-compose using processes of recursion and modulation.

A series of techniques were developed to modulate the bio-electrical signal in tandem
with the digital augments. During performative interfacing, these gestures are applied
improvisationally, yet are also honed through a choreography designed to diffract the
signal itself. To generate open-ended modulation of the bio-electrical signal, I combine
different hand gestures at opportune moments, determined in relation to the nuances
of the signal emission.

Fig. 46. Agave Relay, screen capture from video. L-R: HMD view / environment view.

Helpfully, the biological structure of the agave plant is suited to such tactile
improvisations, since its leaves are larger than the human hand and of a thick cellular
consistency that can withstand some manipulation. Below, I show the gestures as
different techniques, via images placed side by side: first, a photographic image of my
hand shows the gesture enacted on the agave plant; and, second, a screen capture
documents the MIDI data generated by the gesture.

Technique 1. Holding the base of two leaves close to the attached electrodes with both
hands generates a higher pitch (Figure 47), imaged as a flat line of tone that is held
over time (Figure 48).

Fig. 47. Technique 1. Image: Simon Howden Fig. 48. Technique 1. MIDI. Image: the artist.
Technique 2. Folding the base of a leaf backward and forward quickly (Figure 49),
shifts the pitch up and down over time, imaged as a ripple-like pattern (Figure 50).

Fig.49. Technique 2. Image: Simon Howden Fig. 50. Technique 2. MIDI. Image: the artist.

Technique 3. Holding the top point of the leaf between thumb and index finger, I drag
my hand downward toward its base (Figure 51). This pitch slide is imaged as a
downward diagonal line (Figure 52). Similarly, if I were to start at the bottom and drag
my hand upward, the pitch slide would follow in that direction.

Fig. 51. Technique 3. Image: Simon Howden. Fig. 52. Technique 3. MIDI. Image: the artist.

Technique 4. Holding the top edge of a leaf with one hand, and the base with the
other, I flutter the top edge (Figure 53). This causes the signal to stutter (Figure
54).
Fig. 53. Technique 4. Image: Simon Howden. Fig. 54. Technique 4. MIDI. Image: the artist.
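
Read together, the four techniques trace distinct pitch contours in the MIDI stream. As a rough illustration only – the thresholds below are hypothetical, and no such classifier ran in the performance system – the four signatures could be told apart from a pitch sequence alone:

    def classify_contour(pitches):
        """Crudely match a MIDI pitch sequence to one of the four gesture signatures."""
        diffs = [b - a for a, b in zip(pitches, pitches[1:])]
        if all(d == 0 for d in diffs):
            return 'held tone (Technique 1)'
        if all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs):
            return 'pitch slide (Technique 3)'
        steps = [d for d in diffs if d != 0]
        flips = sum(1 for a, b in zip(steps, steps[1:]) if a * b < 0)
        if flips >= len(steps) // 2 and max(abs(d) for d in diffs) <= 2:
            return 'ripple (Technique 2)'
        return 'stutter (Technique 4)'

    print(classify_contour([60, 60, 60, 60]))      # held tone
    print(classify_contour([72, 70, 68, 66]))      # pitch slide
    print(classify_contour([60, 62, 60, 62, 60]))  # ripple
    print(classify_contour([60, 67, 60, 60, 66]))  # stutter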

In this situation, where an agave plant becomes an instrument for generating sound
with technical objects whose parameters are open to the corporeal body, hand gestures
need to align with the sound generated by the emergent signal. Such alignment is not
pre-determined, but is in response to the sonics that emerge as a result of the micro-
impulses emitted by the agave, transposed by the MIDI Sprout sensor. The
performative interface created through the conjunction of agave, performer’s gestures,
data and signal networks, is in contradistinction to the way that more traditional
musical interfaces operate. For example, the mechanical interface of the piano is a
keyboard where each note activated is fixed at a designated pitch.16 In my performative
interface, however, pitch changes over time, and is not given in advance: pitch fluidly
shifts through a conjunction of material changes in the apparatus itself (the plant-
signal circuit), the plant’s biological processes, and the materiality of my gestures
applied to the plant’s leaves.

The cause of sonic emissions in plants remains a mystery. It has recently been
established that young corn shoots emit frequencies in the broad range of 10 Hz–240
Hz when growing toward a water source that has been blocked off by an impermeable
barrier. Researchers speculate that the shoots are reacting to the sound of the water,
not the feel of it, as was previously thought (Gagliano et al. 2012:323-4). Yet, the
specific reason for this phenomenon is a matter of debate. For example, Gagliano et al.
state, ‘we are growing increasingly doubtful of the idea that all acoustic emissions by
plants are the mere result of the abrupt release of tension in the water-transport
system’: instead, they suggest the clicks might be a form of plant communication
(2012:324). However, such debates are beyond the scope of this research.
Fig. 55. Screen capture from HMD with my hand under the avatar. Image: the artist.

As mentioned earlier, the holding gestures for modulating with the agave not only
provoked sonic disturbances; they also caused the digital augments to materialize
from the Unity SDK via the Leap Motion/Vive headset, mapped to my physical hand
(Figure 55). As I modulated the bio-electrical signal from the agave, digital hand
avatars materially adapted to my body position, shuffling the coordinates of the
finger’s tip and joint values in the graphic models. At times, the low contrast of the
infrared signal made it difficult to discern the position of my physical hands, nested as
they were in the leaves of the plant when manipulating the bio-electrical signal. The
digital augments similarly provided little clue, since their ‘natural’ orientation had
been altered by hand models designed around a re-jointed orientation between fingers
and palm, confusing my apprehension of body and image. My fingers, pushing at the
base of the leaves or fluttering the tips, were largely occluded from the HMD view,
smothered by the purposefully re-jointed augmented hands (Figures 55 and 56). In
Agave Relay, the performer is not responsible for the actual flow of signal, or for its
continued passage, but is able to use tactility to insert disruptions throughout. Such
disruptions are the stuff of performative affect; qualitative insertions into the flow of
signal as it meets my gestures at the surface of the agave.
Fig. 56. Agave Relay, screen capture from HMD, with contorted avatar. Image: the artist.

During the performance, through a combination of digital and signaletic networks, my
body was partially materialised as signal and data, enfolded as an emergent
phenomenon, shaped through performative intra-actions with the plant-signal circuit.
My bodily movements co-shaped the intra-actions of this software assemblage, but as
well, the data and signal network (diffractively, provisionally and partially) re-worked
my performing body. In the following chapter, through the performance piece Contact
Zone, the notion of a corporeality re-worked by the intra-actions of apparatus,
augments, and signals, will be further investigated. Such recombinations encourage
entanglement across both physical and digital topologies, where human and
nonhuman forces rhythmically generate a mix of realities.

Simon Penny (2009) has commented on the trend, beginning in the 1990s, to
categorise art exploring virtual worlds as ‘Virtual Art’ (such as Grau 2003). Penny
diverges from the neat historicity of this categorisation, noting that many artworks by
influential practitioners such as Rafael Lozano-Hemmer, Perry Hoberman, David
Rokeby, Char Davies, Jeffrey Shaw and others, could also be discussed as critical
inquiries pertaining to embodiment. Penny states that ‘the substantial work of these
artists - which went largely unremarked in the hysteria of virtuality - was the
development of intuitive bodily interfaces to such worlds’ (2009:8). For Penny, virtual
worlds — far from being defined as virtual by computational attributes — articulate
‘the application of computational technologies to embodied, material, and situated
cultural practices’ (2017:389). Penny’s thinking is pertinent here because it
contextualises technologies of virtuality as relational to human practices, in situated
assemblages of material elements that generate unique experiences of embodiment.

I will now turn to a discussion of two important artworks that similarly deploy HMDs
to enact border crossings across the liminal boundaries of the physical and the digital.
While the software assemblages of the Tactile Signal performances do not utilise
HMDs to produce immersive VR, the artworks that I will investigate nonetheless
similarly approach digital spaces as entangled with physical materialities.

Embodiment across digital and physical topologies

In the section that follows, I examine VR as it has been used artistically toward
developing a digital body that does not leave behind its corporeality, but rather exists
alongside and in a vibrant relation with its digital avatar as a multiplied embodiment.
My main focus is Adam Nash and Stefan Greuter’s Out of Space (2015)17 but I gesture
initially to Char Davies’ Osmose (1995)18 as a pioneering work in this area. Made
twenty years apart, both artworks advance a critical exploration of embodiment that
uses sensing apparatuses to introduce nuanced corporeal shifts into stereoscopic VR.
The cross-modal application of sensing to the primarily visual VR experience, situates
the interactant in a hybrid and multiplied modality of digital-corporeal realities that
cross thresholds rather than maintain dualities. Both artworks produce an experience
of VR as situated and emergent. At the same time, these experiences are not
completely interior to the interactant, since both use forms of screen
display/projection of the interactant’s experience to include an audience not directly
involved in the VR experience.

Char Davies’ Osmose locates the ‘immersant’19 in a web of different forms and levels
that are navigated using breath. Osmose consists of a series of virtual modules that the
immersant traverses20, while in a standing position, wearing a custom-designed vest
measuring breath, mapping the virtual world’s parameters to physiological effects
such as inhaling and exhaling (Davies 1995; Davies and Harrison 1996; Davies 2002).
In the virtual environment, actual biological processes were transposed to digital
models, such as plant photosynthesis in the ‘forest module’ (Jones 1995:25). While
emphasis was on the immersive aspects of the data system, where immersants were
calibrated with the digital simulation through the embodied action of their breathing,
the material presentation of the work in the gallery space also incorporated an exterior
perspective. The audience in the space could see the activities of the immersant as they
navigated the virtual modules, shown as a shadow on a translucent screen placed in
front of their body at a human scale (Davies and Harrison 1996:27; Saffer 2008:162);
as well, projected in 3D on another screen in the space, the HMD view of the
immersant was available for the audience to experience (Davies and Harrison
1996:28). The work thus operated in some ways like a performance, where the interior
perspective of the immersant was modulated by exterior perspectives of embodied
action.

As several commentators have noted, the importance of Davies’ artistic contribution
to VR lies in its amplification of many senses rather than solely the ocular (Hansen
2001; Munster 2006; Penny 2009), although others have seen it as a denial of physical
touch (Rajah 1999; Fisher 1999). To privilege either physical body or digital body in
this multi-sensory network would be to diminish the affective power of Osmose, and
its potential to shift an immersant’s sense of their own body as it passed through the
virtual environment. The ‘end’ of Osmose is described by Mark Pesce (2000): ‘[there
was] a sort of near death experience, as they felt themselves drawn up and away from
the fleeting beauty of the world’ (252). This statement serves to highlight the affective
force of contact between virtual environment and immersant. The world of Osmose
generates hybrid digital-corporeal experiences that produce expanded perceptual
modalities of embodiment: immersants did not so much leave their bodies behind, but
instead made a virtual addition to their tangible corporeality by entering an affective
relation with data. In this multiplied state, embodied actions traced a meandering
path, felt in the moment, at various scales and thresholds, where data emerged
through haptic, optic, aural, and proprioceptive connections to the corporeal. Only
through an affective connection between these forces, did the VR experience emerge.

Out of Space (2015), by Adam Nash and Stefan Greuter, is a head mounted virtual
reality (HMD VR) artwork, where ‘each interactor / artist creates a unique virtual
work, unique to themselves and yet outside of themselves, in the world, virtually.’
(Everything is Data exhibition catalogue 2015).21 An interactant wearing the
SpaceWalk VR system (designed by Greuter and David Roberts) accesses the artwork’s
virtual world of data and begins to explore. Activated by the movement of the
interactant, virtual objects are called forth from the data system. Yet these objects are
not formed into graphic models, and there are no representational forms to act as
situated guides. Instead, they are visually abstract propositions that never reveal
figuration: this is data, as data. The artwork is formed, as an emergent event, through
the modulatory relations between the interactant’s corporeal body, the intensive
capacities of Nash’s bespoke data system, and the technical affordances that create
mobility in the SpaceWalk System. SpaceWalk is a system for embodiment in VR that
uses the Oculus Rift as a visual display. It pre-dates by about a year the commercial
availability of VR devices that use gestural interfaces in accompaniment with
stereoscopic vision (such as the HTC Vive). Greuter and Roberts note that their
intention behind the design of SpaceWalk was to design a ‘full body immersive virtual
reality platform [that] opens the door to Virtual Reality in small environments, such
as people’s homes, that is compelling, easy to setup and use’ (2014:1).

Data is apprehended by the audience via a projection system in the room: the HMD
view of the participant, projected at scale, operates as a performance for an audience
of bystanders who may soon themselves be participants. Data is relayed to performer
and audience in partial configurations, apprehended depending on subject position.
The data generated by the interactant is not only interior to their perception –
although obviously there is a different sense of embodiment for the interactant than
for the audience. The performer cannot see their own body, only the phenomena
generated by their intra-actions with the data system; and, while the audience can see
both projection and performer, they have no access to the perceptual immersion of the
VR experience inside the HMD. In this way, inter-related subjectivities, as visual data
of the performance, operate alongside one another in the exhibition space. As iterative
and temporary assemblages of data (or digital matter) in movement, Out of Space
generates relations in an experimental event that cannot be determined in advance.
Without the arrival of the performer, there is no event, there is no data: the ‘art’ is in
the relation. This is not software as an executable series of commands with an already
determined conclusion, but a relational system of co-composition imbricating
software and bodies across a digital-physical topology.

Vaughan and Nash (2017:150) point to the critical combination of ‘performance’ and
elements of ‘liveness’ when attending to the archiving of digital performance artworks,
highlighting the difficulty of capturing those qualities in a recorded document.
Instead, they argue for a conception of documentation that acknowledges that ‘every
performance is unique in one sense and generic in another’: by ‘unique’ they mean
iteratively forming as data emerges from the computational system, and ‘generic’
because each unique data event is formed from the same executed file (152). An
adequate archiving method would need to include comments from
participants/players as to their experience during the performance, presenting these
accounts alongside the recorded document itself. Likewise, throughout this
dissertation, I have included my accounts of the experience of performative interfacing
in MR, in an effort to elaborate for the reader the specific affective or embodied
nuances that emerged through intra-action (of course, from my agentially real
perspective as a ‘human’).22

The alternate formulation of MR proposed by the software assemblages in this
chapter has remained open to the emergent potentials produced by material forces in
the act of relation, as they entangle and re-configure during processes of performative
interfacing. Such an approach to materiality is vested in the recognition of the
capacities of different modes of matter, and the affordances those capacities offer.
Through the Wild Versions and Tactile Signal performances, I showed the specific
ways that processes of performative interfacing allowed the different capacities of
materials to emerge, entangle and co-compose with one another, manifesting new
intra-active phenomena that further questioned the restrictive version of MR set out
by the RV Continuum and supported by concepts such as the ‘presence metaphor’, the
clear ‘window’, and the ‘raw’ video signal.

This chapter touched on the notion of the ‘signaletic’ and allowed that to influence
strategies, methods, and techniques for performative interfacing. The use of infrared
signal as a ground for digital augments, engaged a mix of realities that were
dynamically co-shaped by the signaletic, whose disruptive materiality generated a
version of augmentation that further problematizes the informatic overlay approach.
In the Tactile Signal performances, I engaged a nuanced choreography of intra-
actions in order to co-compose the performance in a relational field that lures infrared
signal, digital augments, hand gestures, and bio-electrical signals into affective and
diffractive composition. Extending this direction in Contact Zone, I will be crafting a
performance that enfolds digital augments, human touch, a computational network,
various audio devices and hardware sensors, as well as an agave plant and a living
green wall, into the gallery space. I will be co-composing with these elements, while
filming the performance for the audience in real time through the Leap Motion/Vive
apparatus. This entangled performing/filming position will problematize the role of
performer, while the head mounted apparatus will continue to challenge and elaborate
my own sense of embodiment: shortly, enacting practices that negotiate with
nonhuman affect, the software assemblage will also generate new senses of subjectivity
and embodiment.

1 For example, Smithson published a series of photographs of the Yucatan Mirror Displacements
1-9 in Artforum with an accompanying essay called “Incidents of Mirror-Travel in the Yucatan”
(1969).
2 The SpeedTree Modeller is a proprietary application made by Interactive Visualisation Inc. and
widely used to create virtual vegetation for cinematic and game design. Retrieved from
https://store.speedtree.com (accessed 8 April 2017).
3 In terms of the application of the models to the data system, the way that the Leap Motion SDK
computationally executes the hand objects is divided into two structures: a graphics model and a
physics model, the former producing the appearance of the hand, and the latter the anatomical
mesh. In the Wild Versions, the graphics models were designed around the types of living
vegetation found in the performances’ various locations, while the physics model is based on the
human hand. The plant images texture the mesh of the physics model, forming a loose wrap
around the hand that flows into screen space. In the design of the graphic texture, gaps were
included to abstract, to some degree, the figurations of the hand, and there was an emphasis on
the idea of causing the leaves and branches to partially wrap the hand mesh.
4 Refer to Appendix 1, for accompanying video documentation of the Wild Versions to apprehend
this phenomenon.
5 They distinguish between viewing an object in the real world with the naked eye (direct viewing)
and viewing an onscreen object (non-direct).
6 The first HMD to be supported was the Oculus Rift (2017), with the HTC Vive added later that
year. I use the HTC Vive for my research. Retrieved from https://www.vive.com/au/product/
(accessed 4 February 2018).
7 Officially this only works with the Oculus Rift, but I have had no issues operating with the HTC Vive.
https://developer-archive.leapmotion.com/gallery/oculus-passthrough (accessed 10 April 2018).
8 The hardware device used here for capacitive touch sensing is the MIDI Sprout. It uses non-
invasive electrocardiogram (ECG) electrodes attached to the surface of plants to transmit an
electrical current from plant to device. Retrieved from https://www.midisprout.com (accessed 11
April 2017).
9 When selecting VSTs, my intent was to accentuate the polyrhythms manifest in the voltage.
Each VST was modified by the addition of reverb, delay and perhaps envelope parameters such
as oscillation. My tendency was to select drums rather than pads, and to think closely about
timbre in the context of the tones emitted. Each VST was equalised and compressed in relation to
the overall sequence, with gain adjusted to bring notes with less velocity to alignment with those
with more.
10 To positionally track the user’s head in physical space, the HTC Vive uses a system connected
to two base stations that emit lasers, which are triangulated with a sensor network placed on the
head mount. Differing from the earlier wave of VR devices, the stereoscopic vision of the Vive is
accompanied by (optional) hand held controllers that sense the location of the user’s hands and
triangulate that position with signal from the laser base stations and sensors attached to the
HMD itself (Niehorster, Diederick, Li and Lappe 2017).
11 Pieces for Plants has had numerous public performances and iterations, including at the
Lincoln Center Out of Doors Festival and The Lab in San Francisco. Pieces for Plants was first
performed at the Chapel of the Chimes in Oakland, California in 2001. Retrieved from
performed at the Chapel of the Chimes in Oakland, California in 2001. Retrieved from
http://miyamasaoka.com/work/2006/pieces-for-plants-gallery-installation/ (accessed 19 July
2018).
12 Retrieved from http://miyamasaoka.com/work/2006/pieces-for-plants-gallery-installation/
(accessed 19 July 2018).
13 Phenology is ‘the study of recurring plant and animal life cycle stages, especially their timing
and relationships with weather and climate’ (Schwartz 2003:1), so phenological time in plants is
based on the biological rhythms of the seasons: the equivalent form of ‘human’ time being
durational.
14 Documentation for this technique can be found here: https://docs.unity3d.com/Manual/class-Mesh.html
15 Documentation on this technique can be found here: https://docs.unity3d.com/Manual/class-LineRenderer.html
16 For example, the fifth A (called A440) on an ideal piano is tuned to 440Hz: when you watch a
piano tuner at work, this is the first frequency they demarcate in the system, with the other
frequencies being divided across the remaining 87 keys (Reblitz 1976) so that the system is
considered “even tempered” (Fischer 1975:98). Subjecting the instrument to a particular tuning
method, which is universally accepted as correct (prepared piano experiments by John Cage and
others aside), means that a tuned piano always expresses the same spectrum of frequencies.
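The arithmetic behind that division can be stated compactly (a standard statement of equal
temperament, added here for clarity): numbering the 88 keys from 1, so that A440 is key 49, the
frequency of key n is f(n) = 440 × 2^((n − 49)/12) Hz, meaning each semitone step multiplies the
frequency by the twelfth root of 2 (approximately 1.0595).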
17 Exhibited at Everything is Data August 14 – September 26, 2015. NTU ADM Gallery 2,
Nanyang Technological University, Singapore.


18 Osmose was first exhibited in 1995 in Montreal, Canada, at ISEA, the 6th International
Symposium on Electronic Art.


19 ‘Immersant’ is the specific nomenclature Davies uses to describe the role of the interactant or
participant in her work (cf. Jones 1995:25).


20 According to Jones these modules include ‘nearly a dozen virtual worlds in which the user
explores and becomes a part of: a world of text and literature through a fog; a forest; a clearing; a
pond; a leaf; one can journey inside the ground; into an abyss; into a world of lines of code’
(1995:24).
21 Retrieved from http://gamedesignresearch.net/out-of-space/ (accessed 4 October 2016).
22 Like Out of Space, all my software assemblages are from ‘generic’ files that iterate differently
with each successive performance. The versions of the performative interfacings recorded in this
dissertation, then, are only one potential version.
CHAPTER 4

Augmented materialities at the edge of control

Interweaving materialist and critical posthuman conceptions of matter, organization,
and corporeality, I have generated software assemblages that entangle a multitude of
elements as they co-emerge. These elements are drawn from three core material flows:
the computational, the corporeal, and the organic. Inflecting Barad’s agential realism
with Deleuze and Guattari’s machinic assemblage, applying Haraway’s ‘companion
species’ approach to living plants, and coupling DeLanda’s radical conception of space
with the software assemblages’ hybrid media-ecological designs, has generated an
understanding of augmented materialities as potentialized, dynamic, and intra-active.
Recognizing the affective potential of the nonhuman, afforded a position to interrogate
the performative potential of augmented materialities, away from the confines of
computer science/engineering/commercial approaches to MR.

Rather than conceiving a situation where digital augments are executed by a human
‘user’ as a screen-based interaction, this research has nurtured the idea that both
digital and signaletic materialities emerge through intra-action. Furthermore, the
contingent form of their emergence is actually mutually co-constituted through their
‘entangled agencies’ (Barad 2007:33) with other modes of matter. A persistent refrain
has been the notion that MR emerges off screen as well as on. While it is a technical
necessity that the screen be the primary site where digital materialities are displayed,
my approach to MR pays attention to phenomena that burst into physical space as
well. In this final and concluding chapter, I discuss the culminating software
assemblage for this research, Contact Zone.1 Thinking with diffraction – a concept I
borrow from Haraway and Barad – as an artistic strategy, I will aim to articulate
performative interfacing in MR, as enmeshed with the intra-active relations between
various phenomena born of plants, corporeality, signal, and code. These phenomena
are generated across, by, and through, different kinds of matter and materialities.
Contact Zone will investigate the diffractive movements of performative interfacing –
such as those that resonate with and respond to data and signal – by human and
nonhuman actors. In concert with this analysis, is an examination of certain influential
artworks that similarly use plant energies to re-figure human-nonhuman relations in
gallery spaces. Focus will turn to Christa Sommerer and Laurent Mignonneau’s
seminal installation Interactive Plant Growing (1992)2, as well as Gregory Lasserre
and Anais met den Ancxt’s Akousmaflore (2007-present)3, which likewise engage
tactility to investigate the artistic potential for plants and humans to co-compose
together. Before I lean toward the implications of plant-human-data in Contact Zone,
a further investigation of diffraction will expose some of the issues at stake for the
software assemblage's re-working of MR.

Diffraction, intra-action, and boundary-making practices

For Donna Haraway, diffraction deploys a different optics than a reflective looking,
which seeks to see only the same: ‘diffraction patterns are about a heterogeneous
history, not originals. Unlike mirror reflections, diffractions do not displace the same
image elsewhere’ (Haraway 2000:101). In my research, interference patterns enacted
by diffraction are investigated as entry points for thinking/experiencing material
relations differently. For example, the plant-hands that have seeded their way into
several of my software assemblages de-form the human hand as they engage
semblances of ‘leaf-ness’ oscillating across data, suggesting a more tactile approach to
virtual onscreen space. Yet, this likeness to leaves is not an imitation or mirroring of a
leaf, but a diffractive approach, where the human hand disperses across physical and
digital spaces, as well as oscillating with plants to co-compose.

For Barad, diffractive approaches aim to ‘produce a new way of thinking about the
nature of difference, and of space, time, matter, causality, and agency, among other
important variables’ (Barad 2007:73). To move beyond the geometry of optics, Barad
conceives diffraction as – first and foremost – a material process that sidesteps what
she terms the ‘self-referential glance back at oneself’, inherent to reflective methods
(88).4 The disturbances caused by diffraction generate patterns of interference that
are performative and entangled, not representational or analogous (88). Barad also
notes that the diffractive movements visible in (for example) ocean waves also hold at
the smallest scales, such as in the movement of electrons. For example, the Davisson–
Germer experiment (1927) found that under certain conditions, electrons shot
through a vacuum would produce both a wave pattern and a particle pattern
(2007:82). Previously, it was thought that electrons travelled either as waves or as
particles, not as both depending on the conditions. This is the core of Barad’s point in
relation to a diffractive methodology, socio-culturally applied: changing the conditions
that cause apparatus and matter to intra-act as they do, can produce radically different
results for the ‘marks’ they leave on bodies. Her re-working of diffraction – via an
understanding of the vibrancy of matter operating at many scales – is highly applicable
to media art, since it assists in situating entangled forces such as signal and data as
they iterate through different configurations of apparatuses.
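
For readers less familiar with the optics being borrowed, the idealised two-slit case makes the pattern concrete (this is a standard textbook statement rather than Barad’s own formalism): waves passing through two slits separated by a distance d superpose on a screen with intensity I(θ) = I₀ cos²(π d sin θ / λ), so bright bands of constructive interference appear wherever the path difference d sin θ equals a whole number of wavelengths, mλ. The Davisson–Germer result follows once electrons are assigned a de Broglie wavelength λ = h/p: under those conditions, matter itself produces such an interference pattern.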

Through Contact Zone, I will be articulating various diffractive processes that occur as
a result of the intra-actions produced out of an ecology of plants meeting the
electromagnetic spectrum (infrared signal), meeting sonified inaudible frequencies
(ultra-sonic noise), meeting custom designed software (digital code), meeting my
tactile and fluid human gestures (the performing body). Before turning directly to
Contact Zone, it is important to situate my work in the context of some of the different
artistic mobilisations of plant ‘energies’ that have tried to rethink the material
relations made possible by re-assemblages of the technical and organic.

Natural ecologies in art, theory and culture

Investigating the various ways that artists have engaged the energy coming from
plants reveals a range of approaches, from the aesthetic and sculptural to the
conceptual and even genetic. Art that connects plants with media technologies
emerged as a notable preoccupation in the 1970s, when influential pieces such as Nam
June Paik’s TV Garden (1974) arranged plants as sculptural elements in a media
environment. The practice of using plants as sculptural objects in a fixed spatial
arrangement is continued in contemporary art, where various installation artworks
by Olafur Eliasson — such as Forked Forest Path (1998) — chart a visitor’s trajectory
through the gallery amidst organic structures. Artificial environmental realities that
turn interior gallery spaces into scaled-down exterior environments, enfold the visitor
as a relational component in a hybrid ecology.

From the 1970s, seminal compositions for sound performance such as Child of Tree
(1975)5 by John Cage, sonified the bio-electrical signals from plants as musical
instruments that were ‘played’ by a performer. Crucial here was the idea that, when
playing these loosely scripted pieces, ‘the focus is on carrying out the action itself,
regardless of consequence’ (Johnson 2003:504). Cage’s sense of music as an action
that negotiated durational time went against metrical conventions inherited from
Western classical music, where music was performed in determined time signatures.
At around the same time, Richard Lowenberg and John Lifton used the ‘gold needle’
technique6 to measure the bio-electrical signals from plants, transferring those signals
to electroencephalography (EEG) devices worn by human interlocutors.7

During the same period, John Baldessari used a Sony PortaPak to make the
provocative conceptual video Teaching a Plant the Alphabet (1972).8 A response to
Joseph Beuys’s performance How to Explain Pictures to a Dead Hare (1965),
Baldessari’s video staged the absurd situation of a human trying to teach a plant to
read, a comment on the ‘hippy’ generation’s desire to communicate with plants.
Baldessari states:

I thought conceptual art at that time was too pedantic. There were many ways
artists used language, so why not try some other way? … Teaching a Plant the
Alphabet was done during the hippy times. There were books about how to
communicate with your plants. I thought, okay, I guess I’ll start with the
alphabet and then we’ll talk … . (Baldessari quoted in Morgan 2009).9

The brutal black humour in Baldessari’s teaching ‘method’ toward his plant-pupil,
helpfully illustrates the differently perceived thresholds between human life and plant
life. Baldessari’s critique is levelled at the tendency in some art of the period to read
the signals coming from plants, and transmitted to humans, as an intentional form of
communication with humans by plants. That plants use chemical
signals to communicate with one another as well as with predator species in the
specific context of their local ecology, is an emergent research trajectory explored by
evolutionary biologists using detailed sonic and visual imaging techniques (Trewavas
2005; Ferrari, Wisenden and Chivers 2010; Gagliano 2012; Gagliano and Renton
2013). Yet, still other researchers point out that since the accuracy of imaging the
inside of a living plant is unreliable, little is known about the specific cellular
configuration that motivates chemical processes, and while isolated data is collected
and interpreted, an overall systemic understanding of plant signalling is still lacking
(Bögre and Beemster 2008). Care needs to be taken, then, in media art research such
as mine, when discussing the idea that plants emit signals directed toward other actors
in their surroundings. My artistic research is clear that the electrical signals
transmitted by plants – that I co-compose with in the two Tactile Signal performances
as well as in Contact Zone – are not communications overtly directed at human bodies.
Nonetheless, this does not prevent a form of co-composition on my part from emerging
as part of the software assemblage.

According to Prue Gibson (2018), a shift is taking place in artistic practice engaging
plants, where the organic realm is no longer treated as primarily aesthetic material.
Rather, artists like Natalie Jeremijenko (Gibson 2018:166) and Eduardo Kac create
artistic projects that communicate the idea that plants have their own particular
agency.10 Combining both issues of genetics and signal, Laura Beloff and Jonas
Jørgensen nurtured a community of Danish Nordmann Fir trees that had been cloned
from the same biological stock (Beloff and Jørgensen The Condition 2015-2016).
Placing them in rotating boxes designed to negate the effects of the earth’s
gravitational pull on growth cycles, their inquiry sought to probe ‘futuristic
speculations on the possibility of plant societies living under radically different
conditions’ (Beloff and Jørgensen 2016:19). Artworks such as those by Beloff and
Jørgensen, engage with processes of plant-signal transduction, which examine the
capacities of a plant’s calcium sensing system.11

In the seminal installation, Interactive Plant Growing (Sommerer and Mignonneau
1992)12, participants were able to touch real plants and precipitate the on-screen
growth of up to twenty-five species of digital plants. The screen ‘growth’ was only
activated if participants found the right combination of tactile micro-gestures.
Arranged in a semi-circle in pots adjacent to a large screen, plants became interfaces
that enact computational forms of growth that suggest artificial life (Sommerer and
Mignonneau 1992:1; Whitelaw 2004; Gatti 2009:6). Participants needed to activate
the ‘interactive plant’, soliciting sonic modulations that would cause different varieties
of digital plant to emerge on screen. A careful interaction would reveal ‘growth’ in the
real plants’ digital avatars, including increases in health, branch and leaf growth.
Conversely, a careless touch could send the avatars into a negative descent toward a
weed or invasive plant. The artists describe the experience for the participant as one of
thoughtful exploration where they must tune to the plants:

Since it takes some time for the viewer to discover the different levels for
modulating and building the virtual plants, he will develop a higher sensitivity
and awareness for real plants (Sommerer and Mignonneau, 1992).13

Sommerer and Mignonneau’s work places plants as active elements of the installation;
they have some agential materiality with a participatory audience. The tuning that
participants must develop to encourage the plants to flourish, is entirely different from
art that uses plants for passive aesthetic purposes. On this point, John Ryan (2015)
provides a useful summary of the shift toward approaching plants as co-composers:

Whereas visual plant art, tactile plant art, and plants-as-art form exact degrees
of representation or manipulation, plant-art produces a flux of meaning
iteratively between the plant, artist, audience, and artwork in sensory contact.
This flux is the basis of the co-becoming between us and other, between nature
and technology, between the vegetal and digital, and is a salient mark of plant-
art (Ryan 2015:54).

Such artworks not only question the idea of plants as passive objects, but extend
human-plant relations to recognise their situated and embodied modes of agency.
Moreover, as manifest in Sommerer and Mignonneau’s statement above, tactility
emerges as a strategy that might afford a richer interrogation of plant-human relations
beyond visual aesthetics.

My own relationship with plants is culturally entangled with my Māori ancestry, where
the organic kingdom is considered to coexist in a radical cosmological contingency
with humans (Reed 1963). Explicated through the concept of mauri, which is the idea
that each element of the natural environment has a ‘life force’ (Pohatu 2011), new
configurations of care and mutual ethical responsibility emerge through kinship links
between human and nonhuman actors (Royal 2003:95). These nurture familial
relations of care or kaitiakitanga (Barlow 1991). Episodes of material transferral and
transmutation from cosmology are numerous, such as the figure of Tane Mahuta, the
symbolic man-tree of the Waipoua Forest who, in human form, brought light to a dark
universe by pushing his parents (the Sky and Earth) apart, creating the conditions for
the material world to flourish. Such cosmological understandings help articulate
relations outside of humanist (and Western) paradigms that have artificially separated
nature and culture, a paradigm artists such as Kac also revoke at the genetic level.

Haraway points to the connections between humans and plants, where an
acknowledgment of shared genetic matter can assist in the re-assessment of new
situated potentials for inter-species contact:

I am fascinated with the molecular architecture that plants and animals share,
as well as with the kinds of instrumentation, interdisciplinarity, and knowledge
practices that have gone into the historical possibilities of understanding how I
am like a leaf. (Haraway 2000:132)

Haraway’s postulate is that nonhuman animals and organic matter should be
respectfully elevated from a passive role as pets, tools, or material resources14, to status
as a ‘companion species’ to humans (Haraway 2007). In the research that follows,
inflecting Haraway’s inter-species thinking to include plants (Davis 2011:46; Gibson
2018:140), opens a pathway to re-consider them within artistic contexts as a
companion species. Artists working with plants as signal producers have for some
time been thinking with Haraway's postulate. This point was underscored with
reference to Miya Masaoka’s work Pieces for Plants (2000-2012), examined in chapter
3, and as well it resonates with Sommerer and Mignonneau's statement above. In
another example, Gregory Lasserre and Anais met den Ancxt’s Akousmaflore (various
iterations, 2007-present), blends signal sensing from plants with human touch.

Arranged in a gallery in hanging baskets, a grid of plants await a human interlocutor.
The plants that compose Akousmaflore are fitted with specially designed sensors that
are in turn connected by MIDI to sound design software. Sensors allow the detection
of bio-electric potentials in plants generated by ions (Ando et al. 2011), potentials that
are converted chemically by the plant’s physiology to an electrical current (Fromm and
Lautner 2007). As a participant’s hand touches the plants, the electrical field of the
human body contacts the electrical charge harnessed from the plant: this collision is
materialised as sound. In artworks such as Akousmaflore, some of the plant processes
captured include the conversion of light to chemical energy in photosynthesis, and the
selective absorption of water by ‘guard’ cells that swell and place pressure on stomata,
triggering osmosis. These signal transductions, and more, come into play to produce
the sonic emissions that fill the gallery space, leading to the impression of a plant
‘singing’.15 Contact with the plants via human touch occurs through the meeting of
human and plant electromagnetic fields, and this contact produces the sound in the
installation. In the previous chapter, we examined the particular material
arrangement of the Tactile Signal performances, where bio-electrical signals from
plants were experimentally deployed as sonic augmentation. In Contact Zone I build
on this prior experimentation, by further interpolating plant signal into relays with
digital augments, and then modulating those using my body.

Nurturing a bio-system for generating art

Part of my software assemblage research has been to nurture my own bio-system, a
living ecology of specially selected plants that are available for performative
interfacing in MR. Housed in a 1.2-metre-high by 1-metre-wide green wall (Figure 57) is
a diverse ecology composed of eighteen pots, containing thirty-two individual plants.
Additionally – for the visitor-led experience – there is a row of bird of paradise plants,
and a large Agave Attenuata that emits a bio-electrical signal, captured during the
entire performance of Contact Zone. Together, these plants form a bio-system which I
have cultivated since November 2017. Some plants have perished along the way, while
others, unhappy with green wall life, have been moved to more appropriate conditions
in my backyard. Following Haraway’s account of the need for greater care in regard to
inter-species contact, I have approached this ecology not only as a selection of artistic
materials, but also with the idea in mind that plants could be considered as bodies
(Hall 2011; Marder 2013). Considering a plant as a body guides one in a process of
nurturing, rather than of growing it to produce a resource – for example, an interface
in my exhibition. It also involves a responsibility of care even after the exhibition
concludes: that plants in pots be planted out in a more nutrient-rich garden
environment, cared for indefinitely in green walls, or gifted to my local community
garden.

In the Contact Zone exhibition, this ecology is transported from the post-studio reality
of my backyard and front porch, to the less vibrant interior of the gallery space. A
different proposition for mixing realities is generated inside the gallery; one that
conjoins the two ecologies of media and nonhuman organic matter. As a practice,
augmenting biological environments thinks with human and nonhuman relations as
entanglements.16 Exploring performative interfacing across a living ecology adds
further nuance to the notion of diffraction: plants have their own generative processes
that are unpredictable and mysterious, which will be harnessed shortly as augmented
materialities.

Fig. 57. Green Wall Panel on my front porch, March 2018. Image: the artist.

Contact Zone: overview and design

Contact Zone nurtures the emergence of human and nonhuman matter, as a ‘dynamic
relationality ... being attentive to the iterative production of boundaries, the material-
discursive nature of boundary-drawing practices, the constitutive exclusions that are
enacted, and questions of accountability and responsibility for the reconfigurings of
which we are a part’ (Barad 2007:93). Alongside this thinking, the technical and
expressive design of Contact Zone proceeds from the concept of iteratively re-
assembling many of the elements of this research so far.
The design of Contact Zone composes a situation where plant bodies resonate with
human bodies and technical devices, enmeshing signals and data as augmented
materialities. It operates in two main parts: firstly, an unstructured visitor-led
experience, where participants can explore the nuances of their hand gestures as they
arise in tandem with a digital avatar, activated alongside plants whose bio-electrical
signals have been sonified; and, secondly, a performance of my own lasting approximately 15 minutes, utilising the techniques for performative interfacing discussed throughout the course of this research. While technical details such as the arrangement of apparatuses, the modules that compose the software, and the placement of materials in the physical environment are determined by me in advance of the exhibition, the generative and transient relations that coalesce via the entangled emergence of signal, data and corporeality unfold as they will on the day. Manifested
by the material-discursive boundary-making practices of agentially real yet nonhuman
entities, several processes that I performatively intra-act with – such as the bio-
electrical signals from plants and the ultra-sonics that create feedback in the room
environment – are indeed quite unstable modes of matter/signal.

A recurring process in my research trajectory has been the re-working of the Leap
Motion gestural controller as a device that is open to more performative modes of
interfacing than was intended by its industrial/commercial designers. Contact Zone inflects both the handheld and head-mounted permutations of the Leap Motion, described in the previous two chapters (see Figures 60-62). Folding the two permutations together in one performance requires that I negotiate processes that manage demanding corporeal movements: as observed in chapter 2, my hands improvise micro-gestures as they come into contact with plants and their signals; and, as articulated in chapter 3, my head is the 'camera', and it must frame a continuously tracked point-of-view shot as I trace a pathway through the gallery space. Elsewhere,
I have intra-acted performatively with digital hand avatars using micro-gestures. In
software assemblages such as Tactile Light, Tactile Sound and the Wild Versions, I
watched the screen emergence of the digital hand avatars and adjusted my hand
position and orientation to account for that movement. In that work, I spent much
time attending to the choreographic role of hand gestures during intra-action. Here,
however, my whole body is involved. Before tracing the trajectory of my solo performance, I will unpack the visitor-led encounter, as this will unfold prior to my elaboration.

Entering the room, visitors to Contact Zone encounter a row of bird of paradise plants
arranged in front of a large LCD screen (Figure 58). Attached to their stems are
ultrasonic microphones as well as a MIDI Sprout sensor, each using a different technique to capture frequencies inaudible to humans. Next to the plants is a Leap Motion gestural interface, the device that will activate the Unity system and send digital augments to the screen. The screen itself shows a webcam image stream: the camera sensor is pointed in the direction of the visitor, but it also captures the environment of the room. In this experience, two parallel processes are manifest: firstly, picking up the gestural interface, participants experience their hands emerging
alongside a digital avatar with which they can co-compose; secondly, by using tactile
gestures on the leaves of the real plants, a sonic frequency shift can be activated, and
thoughtful hands may be able to modulate the plant’s signal. Open to the emergent
potentials of hand gestures in relation to the digital as well as the agential intra-actions
of plant multiplicities, the visitor-led experience allows participants to encounter – on
a more intimate scale – some of the primary materials and processes deployed in the
performance to follow. Visitors are given time with the reactive plants and the gestural
controller before my performance commences.
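
To give a concrete sense of how this coupling can be wired in software, the following minimal sketch indicates the kind of Unity script that might sit behind the visitor-led station. It is illustrative rather than a record of my production code: the class and field names are hypothetical, and it assumes the Leap Motion Unity plugin's LeapProvider component (which exposes tracked hands each frame, with palm positions converted via the plugin's ToVector3() extension) together with Unity's WebCamTexture for the camera stream.

    using UnityEngine;
    using Leap;
    using Leap.Unity;

    // Sketch: webcam backdrop plus a digital augment that follows the palm.
    // Hypothetical wiring for the visitor-led station; names are illustrative.
    public class ContactZoneStation : MonoBehaviour
    {
        public LeapProvider leap;   // Leap Motion Unity plugin component (assumed)
        public Renderer backdrop;   // quad showing the room via webcam
        public Transform augment;   // digital object that co-emerges with the hand

        WebCamTexture cam;

        void Start()
        {
            cam = new WebCamTexture();            // default camera: faces the visitor
            backdrop.material.mainTexture = cam;  // room and visitor appear onscreen
            cam.Play();
        }

        void Update()
        {
            Frame frame = leap.CurrentFrame;      // tracked hands for this render frame
            if (frame == null || frame.Hands.Count == 0) return;

            Hand hand = frame.Hands[0];
            // The augment is not a fixed overlay: it trails the palm, so visitor
            // and avatar continually adjust to one another.
            Vector3 target = hand.PalmPosition.ToVector3();
            augment.position = Vector3.Lerp(augment.position, target, 0.2f);
        }
    }

The interpolated follow in the last line is the design point: the avatar lags the hand slightly, so the relation remains visibly co-compositional rather than a one-to-one mapping.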

Fig. 58. Contact Zone Visitor-led experience showing reactive plants, webcam view on screen, and
visitors during intra-action. Installed at Black Box, 19-23 November 2018.

Performing with augmented materialities

Stacked four tiers high with potted plants is a green wall, illuminated by the glow of a studio light. At its side, two LCD screens are currently black. Soon, one screen will
burst forth with the mediated flow of an infrared signal, sent from the Leap Motion/
Vive apparatus, worn on the head of my performing body. At that moment, my hands
will be pressing the leaves of a living agave plant, working with its bio-electrical
emissions in an attempt to modulate that signal. Emerging in tandem are three
operations: the infrared signal with enfolded augments, passing from Unity to the
HMD to the LCD screen; the bio-electrical signal emitted from the agave, passing to
the MIDI Sprout then to Logic Pro X; and, the hand gestures and shifting body
movements I will be using to modulate the agave’s signal, and at the same time
articulate the augmented infrared signal. Tuning to the bio-electrical signal from an
agave plant, I modulate with its sonics, using the same basic format and gestures as
the Agave Relay performance in chapter 3. Digital augments emerge in tandem with
my hand gestures, as I shift the bio-electrical signal – operating as augmented audio
– coming from the agave plant (Figures 59 and 60).
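
The MIDI Sprout performs this transduction in hardware, but the principle can be sketched in a few lines. The fragment below is illustrative only and is not the device's firmware: ReadVoltage() and SendNote() are hypothetical stand-ins for the sensor input and the MIDI output to Logic Pro X, and the reading is assumed to be normalised between 0 and 1.

    using System;
    using System.Threading;

    // Sketch: bio-electrical fluctuation transduced into MIDI note events.
    // ReadVoltage() and SendNote() are hypothetical stand-ins, not a real API.
    class PlantSonifier
    {
        const double Threshold = 0.02;   // ignore drift below this delta

        static void Main()
        {
            double previous = ReadVoltage();
            while (true)
            {
                double v = ReadVoltage();             // galvanic signal across the leaf
                double delta = Math.Abs(v - previous);
                if (delta > Threshold)                // the plant 'speaks' only on change
                {
                    int pitch = 36 + (int)(v * 48);   // map voltage into a MIDI range
                    int velocity = Math.Min(127, (int)(delta * 1000));
                    SendNote(pitch, velocity);        // onward to Logic Pro X
                }
                previous = v;
                Thread.Sleep(10);                     // sample at roughly 100 Hz
            }
        }

        static double ReadVoltage() { return 0.5; }        // stand-in sensor read
        static void SendNote(int pitch, int velocity) { }  // stand-in MIDI out
    }

In this schema, a silent plant is simply one whose signal is not changing: no delta, no note.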

Fig. 59. Contact Zone video still. Hands modulate the agave as augments emerge in tandem on
LCD screen (LHS). Image: the artist. Full video documentation available:
https://rewawright.com/2018/11/28/mixed-reality-with-plants-data/

Fig. 60. Contact Zone, 'agave modulation' segment, HMD view, infrared signal captures my hand (far left) as well as augments. Image: the artist.

When this process ends – about 6 or 7 minutes later – with the agave still bubbling its
sonified signal in the ambient space of the gallery, I move toward the green wall. Still
wearing the HMD, adapting to my new perceptual orientation, I move with slow and
careful gestures.17 Forces such as the weight and proportions of the headset, and demands such as the need to be mindful of cables as I move, also influence my movements. How I spatially position my body will influence the digital augments and bio-electrical
modulations (Figure 61). With my vision limited, my bipedal capacities are necessarily
tentative: accounting for this new embodied perception, it would be unwise to rush.

Fig. 61. Contact Zone. HMD view as performer moves from agave to green wall. Image: the artist.

Taking a minute or three to reach the green wall – only a matter of metres away – I
pick up a second Leap Motion interface, configured for handheld performative use and
connected to a second computational network, parallel to the one that drives the Leap Motion/Vive apparatus (the HMD).18 The second LCD screen erupts with an
image stream (Figure 63). However, since my new infrared ‘vision’ transfers only the
electromagnetic radiation present in the room – and pixels on an LCD screen do not
emit heat – I cannot see the results of my intra-actions with the second Leap Motion
interface. Cut-off from the wider scene by my new infrared vision, my gestures must
respond to sound and touch as cues. In this phase of performative interfacing, the
feeling of being in my body guides the gestures I make, allowing tactility and a sense of embodied movement through space to take on an amplified role.
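
The following sketch indicates how such an infrared viewport can be assembled in Unity. It assumes a source of raw infrared frames (the Leap Motion exposes its camera images through an image API; GetLatestIRFrame() below is a hypothetical stand-in) and uses the device's native 640 x 240 sensor resolution, which becomes significant below.

    using UnityEngine;

    // Sketch: paint the Leap Motion's raw infrared frame onto the HMD viewport.
    // GetLatestIRFrame() is a hypothetical stand-in for the plugin's image API.
    public class InfraredViewport : MonoBehaviour
    {
        public Renderer viewport;   // quad locked in front of the HMD camera
        Texture2D irTexture;

        void Start()
        {
            // The Leap cameras return 8-bit brightness: one channel, no colour,
            // which is why depth becomes so hard to judge in performance.
            irTexture = new Texture2D(640, 240, TextureFormat.R8, false);
            viewport.material.mainTexture = irTexture;
        }

        void Update()
        {
            byte[] frame = GetLatestIRFrame();   // hypothetical: raw sensor bytes
            if (frame == null) return;
            irTexture.LoadRawTextureData(frame); // one byte per pixel
            irTexture.Apply();                   // upload to GPU for display
        }

        byte[] GetLatestIRFrame()
        {
            return null;   // supplied by the image API in the working system
        }
    }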

Touching the green wall, I aggravate the ultrasonic microphones, causing analogue relays that squeal and feed back in the room (Figure 62).19 Mixed by a sound technician, these relays are routed through sound design components such as reverberation and delay, convolved with the MIDI Sprout signal from the agave (a sketch of the feedback-delay structure at work here follows this paragraph).20 Two sonic signals are now
blended together – from the agave and the green wall – yet I have influence over only
one (via the second handheld Leap Motion). During this phase of performative
interfacing, multiple material flows (digital, signaletic, corporeal, organic) are
conjunctive and contingent. Augmented materialities intra-act together to generate
multiplied phenomena that emerge in tandem, omni-directionally. For example, at the
moment infrared signal meets digital augments there are also physical gestures, bio-
electrical signals and ultrasonic frequencies, all circulating in relays that intersect and
overlap. Processually, augmented materialities augment one another. Circulating
through networked arrangements, the augmented materialities in Contact Zone are
recombinatory entities that recursively combine signal and data, with corporeality.
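
The squeal arises from a feedback path: the delayed output of the room re-enters its own input. The delay component can be sketched as a ring buffer; the fragment below is illustrative only, not the analogue chain or the Logic Pro X processing the technician actually uses.

    // Sketch of a feedback delay line, the basic structure behind the relays
    // that squeal in the room. Illustrative only; the real mixing is analogue
    // and in Logic Pro X.
    public class FeedbackDelay
    {
        readonly float[] buffer;    // circular memory of past samples
        readonly float feedback;    // above ~0.9 the line begins to self-oscillate
        int writeHead;

        public FeedbackDelay(int delaySamples, float feedback)
        {
            buffer = new float[delaySamples];
            this.feedback = feedback;
        }

        public float Process(float input)
        {
            float delayed = buffer[writeHead];          // signal from the past
            float output = input + delayed * feedback;  // past re-enters the present
            buffer[writeHead] = output;                 // and is written forward again
            writeHead = (writeHead + 1) % buffer.Length;
            return output;
        }
    }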

Disrupted by signal, data, and noise – both visual and sonic – this relational system
places extra demands on my cognitive processes. Operating across different registers
of time, and consumed by a relational field of distortion and interference, my tasks are
multiplied. For example, I must frame the HMD with attention to the placement of
augments, improvise with the sound emitted by the plants, moderate my agitation of
the ultra-sonic disturbances, and improvise my gestures in alignment with the
environment, considered as a relational field of movement. With my perception
defined by the distorted view of the infrared signal, and my body restricted physiologically by the Leap Motion/Vive apparatus, my role as performer is problematized: 'seeing' through the infrared signal, my visual awareness of the room
is only partial. Without colour, my brain has trouble discerning the depth of three-dimensional forms. Due to an effect of parallax in the HMD frame, the room seems warped at the edges. As noted in the previous chapter, the digital augments transpose to the screen display at a refresh rate of 90 Hz; the infrared signal, however, is low resolution: the resulting mismatch makes perception even more challenging, and problematizes my HMD viewport further. Another part of perceptually adjusting to infrared vision is accounting for the flashing sensors of the HTC Vive's two laser base stations, which constantly enter my view. Entangled with this conjunctive apparatus and its causal effects, my sensorial boundaries have been re-drawn: my regular (agentially real and embodied) perception is diffracted by the data-signal network.

Enmeshed together in a singular flow drawn from a multiplicity of affective occurrences, augmented materialities draw and re-draw boundaries, as they pass
their energies through this mediatic-vegetal-computational environment. Bursting
with entangled trajectories, this software assemblage is a zone of material contact.

Fig. 62. Contact Zone environment. L-R: LCD screen, performer, green wall. Image: the artist.

This thesis has suggested that to adequately investigate MR as an artistic practice we must also investigate the mixings of reality and the virtual that emerge in physical space, that is, off the screen. Accounting for the emergence of MR as a multi-sited occurrence involves recognizing that augmented materialities have the capacity to
unfold in physical world space as well as digitally in screen space. Through the
generation and analysis of phenomena that emerge and entangle in physical space –
such as micro-hand gestures, signaletic sonics, and corporeal actions – this research
has attempted to cultivate a more balanced view than the screen-based version of MR that sees augments as only an informatic overlay, a view that privileges the digital as the site of an MR experience.

Fig. 63. Contact Zone. LCD screen installation view showing augments produced by head-mounted
and hand-held Leap Motion devices. Image: the artist.

Fig. 64. Contact Zone. HMD screen capture from Leap Motion/Vive apparatus. Temporally
synchronous with Figs. 62 and 63. Image: the artist.

Temporality, intra-active phenomena, embodiment

Another way we might approach this software assemblage is by examining its differently expressed modes of temporality. Contact Zone engages three temporal
registers: the micro-temporal register of machines, the phenological register of plants,
and the durational register of the human body. In the micro-temporal register of
machines, decisions by software operate outside of the threshold of human perception,
yet nonetheless influence human movement and action. For example, the gestural
hand movements that modulate the agave and agitate the green wall are tracked,
micro-temporally, by a computational system. A computational layer sits on a
corporeal layer, as I intra-act with digital augments at the same time as modulating
the bio-electrical signal. My hands are both generative and responding to the nuances
of the digital and signaletic augmented materialities. In the phenological register of plants, organic matter elicits a human response via signal. Yet, while the plants do not contribute to the assemblage in the way we would ascribe to human gesture or action, they are nonetheless affecting and being affected. Across these temporalities,
material phenomena (data objects, digital augments, infrared signals and so forth)
activate one another through intra-actions, where the digital materials of computational networks tangle with the fleshy material of the body, and the slower
paced agential realities of living plants.
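
This disparity between registers is legible in the tracking data itself. As a sketch (again assuming the Leap plugin's LeapProvider component and the Frame.Timestamp field, which counts in microseconds), a single human gesture of a second or two spans hundreds of machine-side samples:

    using UnityEngine;
    using Leap;
    using Leap.Unity;

    // Sketch: contrasting machine micro-time with the duration of one gesture.
    public class TemporalRegisters : MonoBehaviour
    {
        public LeapProvider leap;
        long gestureStart = -1;
        int framesInGesture = 0;

        void Update()
        {
            Frame frame = leap.CurrentFrame;
            if (frame == null) return;

            if (frame.Hands.Count > 0)
            {
                if (gestureStart < 0) gestureStart = frame.Timestamp; // microseconds
                framesInGesture++;
            }
            else if (gestureStart >= 0)
            {
                // One corporeal gesture, hundreds of micro-temporal decisions.
                long micros = frame.Timestamp - gestureStart;
                Debug.Log($"gesture: {micros / 1000000f:F2}s over {framesInGesture} frames");
                gestureStart = -1;
                framesInGesture = 0;
            }
        }
    }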

These different registers of time might be thought of as parameters or thresholds from which to apprehend a different understanding of materiality. Barad explains the
relation between intra-actions and temporality:

Intra-actions are temporal not in the sense that the values of particular
properties change in time; rather, which property comes to matter is
re(con)figured in the very making/marking of time (2007:180).

Partially and contingently, matter becomes what it is because of an intra-actively enfolded relation with not only the apparatuses that materialise it, but also with time in all its variances (durational, phenological, chronological, micro-temporal, and so
forth). The intra-actions that produce time and matter in Contact Zone, are recursively
produced by those same forces. Differentiations between ‘which property comes to
matter’ are enfolded in intra-action as a ‘making/marking of time’. Engaging with
these differently materialized temporal flows means remaining open to time as a series
of material encounters. For example, in the ‘agave modulation’ segment of the
performance, my ear must find an entry point for my gestures, so that they might insert
tactile movements into the existing sonics transduced from the plant.21 Perhaps
placing my hands at the base of a leaf to shift the bio-electrical signal, or at the tip to
cause a stutter, I am open to the potentials of the co-composed materialities that might
take shape. The agave plant, for instance, will not always emit sound. Its signal is
punctuated by pauses, which cause me to similarly wait.22

As augmented materialities iterate in the risky environment of Contact Zone, infrared signals distort, ultrasonic frequencies squeal, screen augments stutter, and plants
vibrate in a bubbling cacophony: all these forces and more re-draw the material
boundaries of this software assemblage. Iteratively assembled, MR in this mediatic-
ecological environment is multi-sited across screen based and physically spaced
topologies. The art is in the relation, and the relation does not unfold the same way
twice.

Through these relations with other orders of temporality – as well as the perceptual
impact of the spatial phenomena discussed earlier – performative interfacing
implicates me in an alternate reality that exceeds my everyday human embodiment.
We might call this new mode of embodiment a critical posthuman performance
modality. Experiencing the performance via the infrared signal and under the physical
constraints of the HMD, I am acutely aware of this apparatus as instantiating
boundary-making practices that shift my sense of embodiment from human to
something else. That something else, however, is not an enhanced posthumanism. As
has been argued throughout this research, the notion of aligning with a computer
simulation to enter an enhanced state of immersion also allies itself with the idea of
leaving the corporeal body behind. N. Katherine Hayles has shown that a view that
‘configures human being so that it can be seamlessly articulated with intelligent
machines’ (1999:3) was a popular theme of second order cybernetics. Interrogating
cybernetic narratives that would separate the informatic from the human body – such as Hans Moravec's Mind Children (1990), where it was proposed that human consciousness would eventually be uploaded to cyborg bodies – Hayles questioned the
notion that the corporeal body might be replaced by an enhanced posthuman form of
physicality (1999:1). Moreover, she argued vociferously for the need to posit
'interventions ... to keep disembodiment from being rewritten, once again, into
prevailing concepts of subjectivity' (1999:5).

Likewise, Nicole Anderson (2017) discusses the conception she terms the
‘transcendent posthuman’, popularised more recently through the writing of Ray
Kurzweil and others in the futurist camp. In this view, technology is seen as the vector
which will allow humans to transcend the limits of our current biological form: as the
narrative goes, there would no longer be a material separation between virtual and
real, as well as machine and human (2017:18-19). However, the actual possibility of such a transcendence occurring is almost certainly – at least in the short to medium
term – positioned in the realms of science fiction. For Anderson, this concept of a
transcendent posthumanism is anchored by the assumption that the human species is
separated from the animal kingdom and therefore should take a dominant role in
human-animal relations (33). Anderson suggests that a more productive thread – in
sympathy with the critical posthumanism pioneered by Hayles and others (see
Braidotti 2006, 2013; Ferrando 2013, 2016) – would be to 'remind ourselves that humans are always already part of the biosphere': such a position might allow 'us to
learn to live with these nonhuman others rather than in opposition to, in domination
of [them]’ (Anderson 2017:37).

Significantly, the transcendent posthuman bodies narrativized by Moravec and Kurzweil are also highly visible in current representations of desktop HMD devices for the delivery of MR/VR. As well as appearing regularly in gaming and entertainment industries' marketing campaigns for desktop MR/VR products – for
example, MR headsets like the Hololens, discussed in chapter 1 – this is also evidenced
in research that explores ways to construct greater immersion in virtual environments,
encouraging a participant to more completely ‘believe’ in the simulation (Robertson,
Czerwinski, and Van Dantzich 1997; Slater 2018; Zhang, Zhang, Chang, Aziz, Esche
and Chassapis 2018). Furthermore, such emphasis on 'being in' the simulation, is in
tension with conclusions from behavioural science and neuroscience, where questions
have been raised as to the level of immersion that might be desirable, or indeed, safe
for an interactant (Bowman and McMahan 2007; Steinicke and Bruder 2014; Aghajan,
Acharya, Moore, Cushman, Vuong, and Mehta 2015). As well, artworks by artists performing in virtual space wearing HMD devices – such as the physiologically demanding VR endurance performances of Micha Cárdenas (2008) or Mark Farid (2014) – attribute a range of negative phenomena to long periods of stereoscopic
immersion, such as visual disturbances, cognitive distress and physiological fatigue
(Cárdenas, Head, Margolis, and Greco 2009).

In my Leap Motion/Vive performances in MR, I do not leave the body behind, so much
as multiply its instances, so that it is emergent in different modes that trouble the
artificially imposed separation between physical and digital topologies. Taken up by
different modes of matter, my body re-emerges in partial transmissions as code,
signal, and movement. Through strategies of performative interfacing via the software
assemblage formulation, my corporeality is contingently diffracted to digital space,
while concurrently adapting to new senses of the physical. The feeling of embodiment
generated here is a transient physiological state that intra-actively shifts with
technology. My research has chosen to articulate a view of human becoming with
technology, where the body is not a discrete or fully formed entity prior to contact with
technological devices or computational networks. Contact Zone is the last phase in a
research process that began with an interrogation of digital augments as informatic overlays, and has ended with a materialist understanding of the potential of augmented materialities as performative and diffractive entities.

1 Contact Zone is the name of the exhibition to be installed as the examination of this work.
Exhibition dates are 19-23 November 2018, at the Black Box, University of New South Wales,
Faculty of Art and Design, Greens Road, Paddington, Sydney.
2 Interactive Plant Growing (1992) is the first of many plant sensing artworks made by Sommerer and Mignonneau. In the permanent collection of the ZKM Media Museum, Karlsruhe.
3 This artwork has had over 100 presentations since 2007, notably at ZKM Karlsruhe Centre for Art and Media (Germany), at Daejeon Museum of Art (Korea), at Museum Art Gallery of Nova
Scotia (Canada), at National Centre for Contemporary Arts (Moscow), at Contemporary Art
Museum Raleigh (USA). http://www.scenocosme.com/akousmaflore_en.htm.
4 Barad gives an example of diffraction from classical physics, where ocean waves hit a rock, and the rock operates like a 'diffraction apparatus' causing the wave to spread, overlap and bend in all directions. In Barad's quantum elaboration of diffraction, she argues that waves are not 'things' or 'objects' but 'disturbances (which cannot be localised to a point) that propagate in a medium' (74-76). In a quantum understanding, the intra-active phenomena caused by diffractive processes – such as the impact and force of the waves hitting the rock – actually shift the matter that molecularly composes it. Therefore, diffraction affects not only the wave itself (the visible pattern of interference seen by the human eye), but also the object the wave hits (a phenomenon that would only be visible using a quantum imaging apparatus).
5 First performance at the Shiraz Festival of the Arts, Iran, 1975.
6 For example, in biological science, a number of techniques exist that base themselves on inserting needles into the root system of plants to measure electrical capacitance, thereby
determining root size and mass (Chloupek 1972; Rajkai, Végh, and Nacsa 2005). These, and
similar, scientific techniques for measurement, have been adapted by artists since the 1970s into
live performance devices that transduce energy from plants into voltage.
7 Notably, Lowenberg and Lifton created bio-sensing artworks included in the film The Secret Life of Plants (1976).
8 John Baldessari (1972) Duration 00:18:08, United States, B&W, 1/2” open reel video.
9 Jessica Morgan (2009) "Somebody to talk to: John Baldessari." Tate Etc. issue 17: Autumn 2009. Retrieved from http://www.tate.org.uk/context-comment/articles/somebody-talk (accessed 4 April 2016).
10 For example, Kac’s ‘plantimal’ called ‘Edunia’ combined his own genetic material with that of a

petunia, creating an entirely new transgenic creation that contests ‘our understanding of the
‘natural’ environment as well as of the environment of art’ (Osthoff 2009:1).
11 Research by biological scientists at the experimental edge of plant sensing draws conclusions regarding plant intelligence through an analysis of growth as a behavioural practice. Trewavas (2005) argues that 'plants transduce and transmit sophisticated sensing systems that are analogous to intelligence'; while Gagliano (2012) argues that the clicking sounds produced by young corn during growth allow them to map the location of water sources and grow toward
those sources. Gagliano contends that plants use calcium-based networks to transmit signals that
may form the basis of a memory map.
12 First exhibited at SIGGRAPH '93, 20th Annual Conference and Exhibition on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 2-6 August 1993.
13 Retrieved from http://www.interface.ufg.ac.at/christa-laurent/WORKS/CONCEPTS/PlantsConcept.html (accessed 7 March 2015).


14 Haraway notes: 'Taking themselves to be the only actors, people reduce other organisms to the lived status of being merely raw material or tools' (Haraway 2007:206).
15 Analogies with ‘song’ are invoked in many descriptions, such as here:

https://www.digitalartarchive.at/database/general/work/akousmaflore.html

16 Following a humanist analysis, nature is considered as a resource within the purview of homo
sapiens, where its role in supporting human life is foregrounded. An increasing number of
thinkers break with this convention (including Haraway 2000; Plumwood 2002; Roa-Rodríguez
and van Dooren 2008; Cubitt 2017) to argue that nature classified as human property only serves
to encourage its exploitation. Resource-driven approaches to nature pose a fundamental
problem, since they position nature under the control of a regulatory web that is structured by
flows of capital. Irigaray and Marder frame what is at stake: ‘The fight over the appropriation of
resources will lead the entire planet to an abyss unless humans learn to share life, both with each
other and with plants. … The lesson taught by plants is that sharing life augments and enhances
the sphere of the living, while dividing life into so-called natural or human resources diminishes
it’ (Irigaray and Marder 2014).
17 The written description of Contact Zone given here is specific to the installation at the Black Box, 19-23 November 2018. However, the photographic images referenced in Figures 62-70 are
from Contact Zone video documentation (see Appendix 1), which was a rehearsal for the
examination performance, enacted at the artist’s studio.
18 Timecode reference in video documentation is 01:45. See Appendix 1.
19 Timecode reference in video documentation is 02:46-04:25. See Appendix 1.
20 During the 15 minute performance, the sound technician is primarily mixing the signals from the MIDI Sprout with the ultra-sonic microphone inputs, paying particular attention to the
analogue feedback so it does not overpower the quieter micro-impulses from the agave plant.
21 Having performed fairly extensively as a musician, I have found no equivalent to this kind of 'playing'. That is why I have preferred the term 'modulation' in this thesis. Playing
assumes that interfacing will occur through a recognised and verified layout, determined in
advance. In the agave modulation – as well as in the visitor-led experience that precedes it –
there is no pre-determined structure of ‘notes’, since the signals emitted by the plant in any given
instant are unknown in advance. Signal only becomes a ‘note’ after it is converted to a MIDI
sequence in Logic Pro X.
22 Timecode reference in video documentation is 00:54. See Appendix 1.

CONCLUSION

The tangible outcomes of this practice-based research are knotted together as two
interwoven strands: the software assemblage as an assistive formulation for
generating an alternative version of MR; and, a set of techniques and methods for
performative interfacing with augmented materialities. Both strands have emerged in
tandem, through strategically enacted re-combinations of theory and practice. To re-
work digital augments from their conventional position as informatic overlays, toward
an entangled relation that would offer the potential for oscillations across digital and
physical topologies, I have investigated the software assemblage as a radical
networked arrangement. Pursued through a suite of techniques that performatively
interface with MR, the speculation that digital augments can be more than informatic has led to the conception that augments might also exceed the purely digital: that they
are, in fact, augmented materialities.

Digital augments, in the sense they are outlined in computer science and engineering,
are considered to be discretely formed in advance of their mutual and reciprocal
contact with one another. It has been argued that, while digital augments are data,
they do not necessarily need to be fixed as informatic: I have suggested that the
informatic overlay approach is unnecessarily dominant in current practice and
discourse in MR, having migrated from technical paradigms, through to culturally
engaged fields. Therefore, the first task of this thesis was to clarify the operations of
digital augments as data, outside of the informatic. Loosening data from the informatic
allowed augments to be imbricated with software assemblages, where they were re-
combined with unexpected materials, such as living plants. Within the software
assemblage formulation, performative interfacing has operated as a strategy with
which to generate MR artwork that is iterative and affective. Using performative
interfacing as a strategy also de-structured the normative uses of certain technical instruments for delivering MR (like the Leap Motion) and gave consideration to the ways that we might generate tactile, signaletic, algorithmic, and gestural patterns of interference. The results of these patterns (left behind on bodies-data-plants) are the intra-active phenomena that are iteratively co-constituted with the software assemblage as a dynamic material arrangement.

Through the software assemblage, this research has developed techniques for
affectively co-composing with expressive conjunctions of materials, including:
recursive strategies for modulating digital augments; amplifications of bio-electrical
data from plants to produce augmented audio; choreographing hand micro-gestures
in tactile and signaletic connections with both augments and plants; and passing
augments through the Leap Motion interface in two hardware configurations,
handheld and head mounted, eventually folding the two methods together in Contact
Zone.

Heterogeneous modes of matter have been articulated through the software assemblage. Firstly, code as digital matter, where attention is paid to relations of co-composition between human and nonhuman. Secondly, signal as electrical matter, in two primary materializations: as the infrared signal from the Leap Motion interface, and as bio-electrical data captured from plants. As well, signals have been both digital (such as from the webcam) and analogue (such as from the piezo or ultrasonic microphones). Thirdly, the human body as matter, systemically constrained within
limits of flesh, yet open to potentials that might produce a modified subjectivity, and
senses of perception that beckon new embodiments.

The network of matter and materials arranged by the software assemblages in this research can be broadly understood as a relational field of movement, further
troubled by the patterns of interference caused by material diffractions. Here,
entanglements between modes of matter (infrared, bio-electrical, fleshy, and digital)
shifted constellations of bodies-plants-data, as they dynamically and rhythmically
aligned in motion. Moreover, alignment was not considered as pre-given via a
computational system. Rather, it was argued that, since the emergence of augmented
materialities is entangled, intra-actively, with corporeality and the signaletic,
alignment must likewise be rhythmic, speciated, and felt through the relational field
with which it co-emerged (Manning 2013:210). Techniques for performing in such
rhythmic alignments, used my body movements to diffract signaletic and digital flows,
and embraced tactility to shift relations with other materialities. Thought was given to
methods that might bring together structurally separate sites, such as physical and digital. To achieve this I explored: the disjunctive software tracking afforded by signal
inertia (chapter 3); the enfolded materiality of the infrared signal as it passed through, and re-emerged from, the Leap Motion/Vive apparatus (chapters 3 and 4); and, ways
that I could use my body to shift relations with flows of digital and signaletic matter,
both of which were understood as augmented materialities.

While my research builds on technical innovations from industrial and commercial MR, it is also critical of the methods used to restrict augments to screen-based formats,
which favour technical elements over corporeal concerns. In chapter 1, it was argued
that the migration of Milgram and Kishino’s taxonomic approach, from engineering to
cultural fields such as entertainment and gaming, and from there to artistic fields such
as media art, has led to the commonly held view that MR is a technical medium for the
delivery of onscreen information. I closely examined some of the supporting
assumptions found in the RV Continuum, in particular the ‘presence metaphor’,
deployed to make a user feel they are a convincing part of a virtual world, and
‘reproduction fidelity’, where the resolution and quality of screen displays are
emphasised as integral to a seamless experience (Milgram and Kishino 1994:1321).
This analysis was extended in chapter 3, where I commented on Paul Milgram’s (2007)
conception of the video image stream as ‘unmodelled’ data, conveying a synthesised
‘real’ world.

To develop the notion of performative interfacing in MR, I explored experimental approaches to choreography. Initially, drawing on the ideas and practice of William
Forsythe, Yvonne Rainer, Merce Cunningham, and Erin Manning, I explored the
performative potential of digital augments as ‘choreographic objects’. This approach
renegotiated data and the physical gestural hand as co-emergent entities. Then,
influenced by the experimental sound practice of Miya Masaoka, the bio-electrical
signals from living plants were introduced as sonic material that would also operate as
augmented audio. Tactility – my touching the agave to modulate sound, for example
– emerged as a corporeal technique that would shift the bio-electrical and the digital
in tandem. Critically, artworks that investigated embodied approaches to VR (by Char
Davies as well as Adam Nash and Stefan Greuter), provided inspiration for my
investigations with the Leap Motion/Vive apparatus. Taking an embodied approach
that de-privileged the visual, I investigated the capacity of augmented materialities to move with my performing body's choreographic relations in the gallery space.
In so doing, I felt new senses of embodiment that re-worked my performing body as it made contact with apparatuses of augmentation. Chapter 4 took up the idea that
plants might be a form of ‘body’, investigating media artworks by Christa Sommerer
and Laurent Mignonneau, as well as Gregory Lasserre and Anais met den Ancxt, that
coupled human touch with reactive signalling. Exploring a critical posthuman
perspective on nonhuman matter, the Tactile Signal performances and Contact Zone
modulated digital and bio-electrical matter as it circulated through a hybrid physical-
digital topology.

Several modes of signal have been explicated as diffractive, nonhuman forces that
emerge in the relation: at first, signal inertia caused by webcam delay in the Wild
Versions opened up a durational gap that modulated my hand gestures and the data system's response; then, in the Tactile Signal performances, plants as producers of bio-electrical signals manifested an alternative approach to augmented audio; and, in
the Tactile Signal performances, as well as Contact Zone, the Leap Motion’s infrared
signal was used to re-work the materiality of the digital augments, enfolding them
within an electromagnetic plane. Through performative intra-actions in the Tactile
Signal performances as well as in Contact Zone, augmented materialities emerged
through a relational multitude of entangled movements, where human and nonhuman
forces emerged, co-composed, and diffracted through one another.

Approaching MR as a software assemblage has been a productive way to interrogate the subliminal notion of control that Chun observed in software culture (discussed in
my Introduction). The conception of software articulated in my research is not
software as a control mechanism, but rather software as a relational force that affects and is affected by social, cultural, embodied and environmental practices. In my
artistic research, software assemblages deploy code, algorithms, and signals, to make
patterns of interference that trouble more stratified formulas of augmentation, such
as the informatic overlay and the taxonomy of the RV Continuum. Diffraction is one
strategy that can be productively applied to media art inquiries that see value in
challenging notions of control in software, rather than accepting control as desirable.
I have attempted to show some of the techniques and methods that can be passed
through diffractive thinking, yet many others are possible. In future directions, my
research will be exploring other ways that diffraction might generate new modes of
MR for media art, and further challenge accepted conceptions of the informatic inherited without interrogation from mainstream engineering and computer science paradigms. Using the software assemblage approach as a prism through which to
diffract the complex phenomena that emerge between devices, software, participants,
and plants, I have articulated an approach to MR that pays attention to the actual
relations of interfacing with augmented materialities.

An unexpected impact of this project is that the performative approach of the software
assemblage has resonated with certain artistic practices from mobile AR that likewise
explore aspects of embodiment. In 2016, the notable AR/VR artists Tamiko Thiel and
Will Pappenheimer presented a paper at the College Art Association (CAA) Conference
in Washington, where they elaborated the software assemblage formulation in
combination with their practice of décollage in public space. This talk was followed
soon after by an article in the well-regarded Media-N: the Journal of the New Media
Caucus (Thiel and Pappenheimer 2016). Such unexpected interest from practitioners
in the AR art community – producing artwork entirely different from my own –
underscores the potential for the pragmatic application of the software assemblage
outside of my forays.

There is significant future scope for research in the field of MR in media art. Many
uninterrogated issues remain, relating to the affective potential of bodies, the impact
of the virtual on our expanded senses of perception, and the nuanced modes of
embodiment that stretch outside of screen space (to name just a few). This Doctoral
dissertation has attempted to analyse the notion of what an augment might do, and
open that to a broader and more extensive conception of materiality than was afforded
by taxonomic models such as the RV Continuum and commercial/engineering
paradigms like the informatic overlay.

With Haraway, my hope is that this research has in some small way diffracted ‘the rays
of technoscience [to] get more promising interference patterns on the recording films
of our lives and bodies’, where ‘life’ is a category that encompasses the nonhuman, and
‘bodies’ are digital, human, and plant (Haraway 1997:16). Rather than setting up a
prescription for how to create or design with augments, I have asked what augmented
materialities are capable of, as they resonate through rhythmic constellations, oscillate
in recursive relays, and generate disturbances that modulate through networks.
Apprehending augmented materialities via the software assemblage takes us beyond the informatic overlay approach, teasing out some of the trouble that lurks in an
alternatively assembled MR, located at the edge of control.

References

Aceti, Lanfranco and Richard Rinehart. 2013. Not Here Not There. Leonardo Electronic Almanac 19(2).
Andersen, Christian Ulrik, Søren Bro Pold, eds. 2011. Interface criticism: Aesthetics
beyond the buttons. Aarhus: Aarhus University Press.
Andersen, Christian Ulrik and Søren Bro Pold. 2018. The Metainterface: The Art of
Platforms, Cities, and Clouds. Cambridge, MA: The MIT Press
Ando, Ki, Yuki Hasegawa, Tamaki Yaji, and Hidekazu Uchida. 2011. "Study of plant bioelectric potential response due to photosynthesis reaction". Pp. 337-342. IEEJ Transactions on Sensors and Micromachines 131.
Ariso, José Maria, ed. 2017. Augmented Reality: Reflections on Its Contribution to
Knowledge Formation. Berlin: De Gruyter.
Aghajan, Zahra M., Lavanya Acharya, Jason J. Moore, Jesse D. Cushman, Cliff
Vuong, and Mayank R. Mehta. 2015. "Impaired spatial selectivity and intact
phase precession in two-dimensional virtual reality". Nature. 18:121.
Azuma, Ronald T. 1997. "A survey of augmented reality." Pp. 355-385. In Presence: Teleoperators and Virtual Environments 6(4). Cambridge, MA: The MIT Press Journals.
Baldessari, John. 1972. Teaching a plant the alphabet. Duration 00:18:08,
B&W, 1/2” open reel video.
Ballard, Dana H., and Christopher M. Brown. 1982. Computer Vision: Stereo Vision
and Triangulation.
Barad, Karen. 1996. "Meeting the universe halfway: Realism and social
constructivism without contradiction". Pp 161-194. In Feminism, science, and
the philosophy of science, edited by J. Nelson. Switzerland: Springer Science
& Business Media.
Barad, Karen. 2003. "Posthumanist performativity: Toward an understanding of how matter comes to matter". Pp. 801-831. Signs: Journal of Women in Culture and Society 28(3).
Barad, Karen. 2007. Meeting the universe halfway: quantum physics and the
entanglement of matter and meaning. Durham, NC: Duke University Press.

Barakonyi, István, and Dieter Schmalstieg. 2005. “Augmented reality agents in the
development pipeline of computer entertainment”. In Proceedings of the
International Conference on Entertainment Computing. 345-356.
Barlow, Cleve. 1991. Tikanga Māori. Auckland: Oxford University Press.
Bekele, Mafkereseb Kassahun, Roberto Pierdicca, Emanuele Frontoni, Eva Savina
Malinverni, and James Gain. 2018. "A Survey of Augmented, Virtual, and
Mixed Reality for Cultural Heritage". Journal on Computing and Cultural
Heritage 11(2).
Beloff, Laura, and Jonas Jørgensen. 2016. “The Condition: Towards Hybrid Agency”.
Pp 14-19. In CULTURAL R>EVOLUTION: Proceedings of the 22nd
International Symposium on Electronic Art.
Benford, Steve, Martin Flintham, Adam Drozd, Rob Anastasi, Duncan Rowland, Nick
Tandavanitj, Matt Adams, Ju Row-Farr, Amanda Oldroyd, and Jon Sutton.
2004. “Uncle Roy All Around You: Implicating the City in a Location-Based
Performance.” Retrieved 9 September 2014.
http://www.blasttheory.co.uk/wp-
content/uploads/2013/02/research_uraay_implicating_the_city.pdf
Benford, Steve and Gabriella Giannachi. 2011. Performing mixed reality. Cambridge,
MA: The MIT Press.
Bennett, Jane. 2009. Vibrant matter: a political ecology of things. Durham, NC:
Duke University Press.
Berry, David. M. 2011. The philosophy of software: Code and mediation in the
digital age. London: Palgrave Macmillan.
Billinghurst, Mark, Adrian Clark and Gun Lee. 2015. "A survey of augmented reality".
Pp. 73-272. In Foundations and Trends in Human–Computer Interaction 8.
Blast Theory 2003. Uncle Roy All Around You. Site specific mixed media artwork,
various locations, London, U.K. Premiered at the Institute of Contemporary
Arts in London in June 2003. Retrieved 11 April 2014.
https://www.blasttheory.co.uk/projects/uncle-roy-all-around-you/
Blast Theory. 2009. FlyPad. Site specific mixed media artwork designed for the
Public Gallery, West Bromich, England. Retrieved 14 August 2016.
https://www.blasttheory.co.uk/projects/flypad/

Bögre, László, and Gerrit Beemster, eds. 2008. Plant growth signaling. Springer
Science & Business Media.
Bolter, Jay and Richard Grusin. 1999. Remediation: Understanding new media.
Cambridge, MA: The MIT Press.
Bolter, Jay & Diane Gromala. 2003. Windows and mirrors: Interaction design,
digital art, and the myth of transparency. Cambridge, MA: The MIT Press.
Boj, Clara, and Diego Díaz. 2008. "The Hybrid City: Augmented Reality for
Interactive Artworks in the Public Space". In The Art and Science of Interface
and Interaction Design. 141-161.
Bonta, Mark and John Protevi. 2004. Deleuze and Geophilosophy a guide and
glossary. Scotland: Edinburgh University Press.
Braidotti, Rosi. 2006. "Posthuman, All Too Human". Theory, Culture & Society. 23:
197–208.
Braidotti, Rosi. 2013. The posthuman. Oxford, U.K.: Polity Press.
Buren, Daniel, and Thomas Repensek. 1979. "The function of the studio." October.
10:51-58.
Cage, John. 1975. Child of tree. Edition Peters Group, Frankfurt/Main, Leipzig,
London, New York.
Candy, Linda. 2006. Practice-based research: a guide. Retrieved June 16, 2015.
https://www.creativityandcognition.com/resources/PBR%20Guide-1.1-
2006.pdf. Creativity and Cognition Studios, UTS Sydney.
Cárdenas, Micha, Christopher Head, Todd Margolis, and Kael Greco. 2009.
"Becoming Dragon: a mixed reality durational performance in Second Life".
The Engineering Reality of Virtual Reality. 7238:801-807.
Cardiff, Janet & George Bures Miller. 2014. The City of Forking Paths. Augmented
Reality app available in various locations, The Rocks, Sydney, Australia.
Retrieved 16 April 2014. https://itunes.apple.com/us/app/the-city-of-
forking-paths/id870332593?mt=8.
Carmigniani, Julie, and Borko Furht. 2011. "Augmented reality: an overview". Pp. 3-
46. In Handbook of augmented reality, edited by Borko Furht. Switzerland:
Springer International Publishing.
Carroll, John M., ed. 2003. HCI models, theories, and frameworks: Toward a
multidisciplinary science. Amsterdam, Netherlands: Elsevier.

Caudell, Thomas P., and David W. Mizell. 1992. “Augmented reality: An Application
of Heads-up Display Technology to Manual Manufacturing Processes." Pp. 659-669. In Proceedings of the Hawaii International Conference on System Sciences.
Chloupek, O. 1972. "The relationship between electric capacitance and some other parameters of plant roots." Pp. 227-230. Biologia Plantarum 14(3). doi: 10.1007/bf02921255.
Chun, Wendy Hui Kyong. 2011. Programmed visions: Software and Memory.
Cambridge, MA: The MIT Press.
Cohen, Michael, Shigeaki Aoki, and Nobuo Koizumi. 1993. "Augmented audio reality:
Telepresence/VR hybrid acoustic environments." In Proceedings of the 2nd
IEEE International Workshop on Robot and Human Communication. 361-
364.
Conger, Kate. 2016. "Niantic responds to senate inquiry into Pokémon GO privacy".
TechCrunch Magazine. Retrieved January 12, 2017
https://techcrunch.com/2016/09/01/niantic-responds-to-senate-inquiry-
into-pokemon-go-privacy/.
Cubitt, Sean. 2017. Finite media: Environmental implications of digital
technologies. Durham, NC: Duke University Press.
Davies, Char. 1995. "Osmose: Notes on Being in Immersive Virtual Space". ISEA ’95
Conference Proceedings, Montreal, Canada.
Davies, Char and John Harrison. 1996. "Osmose: towards broadening the aesthetics of virtual reality". Pp. 25-28. In Computer Graphics 30(4). ACM SIGGRAPH.
Davis, Lucy. 2011. "In the Company of Trees." Pp. 43-62. In Antennae: The Journal of Nature and Culture. Issue 17, Summer.
DeLanda, Manuel. 1998. “Meshworks, hierarchies, and interfaces”. John Beckman,
ed. The Virtual Dimension: Architecture, Representation, and Crash Culture.
New York: Princeton Architectural Press. Retrieved December 12, 2014
http://cumin-cad.architexturez.net/system/files/pdf/7f71.content.pdf
DeLanda, Manuel. 1997. A thousand years of nonlinear history. New York: Zone
Books.
DeLanda, Manuel. 2008. "The expressivity of space". Canadian Art Magazine, Issue
252. Pp.103-107.

Deleuze, Gilles and Félix Guattari. 1987. A thousand plateaus: Capitalism and schizophrenia. Trans. Brian Massumi. Minneapolis: University of Minnesota Press.
Dix, Alan. 2009. "Human-Computer Interaction". In Encyclopedia of Database
Systems, edited by L. Liu and M.T Özsu. Boston, MA: Springer Publishing.
Dourish, Paul. 2004. Where the action is: the foundations of embodied interaction.
Cambridge, MA: The MIT Press.
Dourish, Paul. 2017. The stuff of bits: An essay on the materialities of information.
Cambridge, MA: The MIT Press.
Doyle, D. 2014. "New Opportunities for Artistic Practice in Virtual Worlds". Pp. 321-326. In Proceedings of the 2014 International Conference on Cyberworlds (CW). New York: IEEE Press.
Dyson, Freeman J. 1998. "Science as a craft industry". Pp. 1014-1015. Science 280(5366).
Ekman, Ulrik. 2013. "Of Intangible Speed: 'Ubiquity' as Transduction of
Interactivity". Pp. 279-309. In Throughout: Art and culture emerging with
ubiquitous computing, edited by Ulrik Ekman. Cambridge, MA: The MIT
Press.
Ekman, Ulrik. 2012. "Of the Untouchability of Embodiment I: Rafael Lozano-
Hemmer's Relational Architectures", retrieved from
https://journals.uvic.ca/index.php/ctheory/article/view/14943/5838
Engberg, Maria and Jay Bolter. 2014. "Cultural expression in augmented and mixed reality". Pp. 3-9. In Convergence 20(1).
Ferguson, Russell, M. Tucker and John Baldessari. 1990. Discourses: Conversations in Postmodern Art and Culture. New York: New Museum of Contemporary Art; Cambridge, MA: The MIT Press.
Ferrando, Francesca. 2013. "Posthumanism, transhumanism, antihumanism,
metahumanism and new materialism: Relationships and differences." Pp 26-
32. In Existenz, An International Journal of Philosophy, Religion, Politics,
and the Arts. Volume 8. No. 2, Fall 2013.
Ferrando, Francesca. 2016. “The Party of the Anthropocene: Post-humanism,
Environmentalism and the Post-Anthropocentric Paradigm Shift.” Pp 159-173.
In Relations, 4.2.

Ferrari, Maud, Brian Wisenden and Douglas Chivers. 2010. "Chemical ecology of predator–prey interactions in aquatic ecosystems: A review and prospectus." Pp. 698-724. In Canadian Journal of Zoology 88.
Fischer, J.C. 1975. Piano tuning: a simple and accurate method for amateurs.
Courier Corporation.
Fisher, J. 1999. "Char Davies." Pp. 53-54. In Parachute 94.
Fromm, J. and Lautner, S. 2007. "Electrical signals and their physiological significance in plants." Pp. 249-257. In Plant, Cell & Environment 30(3).
Fuller, Mathew. 2005. Media ecologies: Materialist energies in art and
technoculture. Cambridge, MA: The MIT Press.
Fujii, K. and Okumura, Y. 2012. "Effect of earth ground and environment on body-
centric communications in the MHz band." In International Journal of
Antennas and Propagation.
Freeman, John C. et al. 2012. "ManifestAR: an augmented reality manifesto." P. 82890D. In IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics.
Friedberg, Anne. 2006. The virtual window: from Alberti to Microsoft. Cambridge,
MA: the MIT Press.
Gagliano, Monica, Mancuso, S. and Robert, D. 2012. "Towards understanding plant bioacoustics." Pp. 323-325. In Trends in Plant Science 17(6).
Gagliano, Monica and Renton, M. 2013. "Love thy neighbour: Facilitation through an alternative signalling modality in plants." BMC Ecology 13:19. PMID: 23647722. doi: 10.1186/1472-6785-13-19.
Galloway, Alexander. 2013. The Interface effect. Oxford, England: Polity Press.
Gemeinboeck, Petra and Saunders, Rob. 2011. "Other Ways of Knowing: Embodied Investigations of the Unstable, Slippery and Incomplete." In The Fibreculture Journal, FCJ-120.
Geroimenko, Vladimir. ed. 2014. Augmented reality art: from an emerging
technology to a novel creative medium. 1st edition. Switzerland: Springer
International Publishing.
Geroimenko, Vladimir. ed. 2018. Augmented reality art: from an emerging
technology to a novel creative medium. 2nd edition. Switzerland: Springer
International Publishing.

Giannachi, Gabriella, Duncan Rowland, Steve Benford, J. Foster, Matt Adams, and
Alan Chamberlain. 2010. "Blast Theory's Rider Spoke, its documentation and
the making of its replay archive." Pp. 353-367. In Contemporary Theatre Review 20(3).
Gibson, Prudence. 2018. The Plant Contract: Art’s Return to Vegetal Life.
Amsterdam: Brill.
Grau, Oliver. 2003. Virtual Art: From illusion to immersion. Cambridge, MA: the
MIT Press.
Gregg, Melissa, and Gregory J. Seigworth, eds. 2010. The affect theory reader.
Durham, NC: Duke University Press.
Greuter, Stefan, and David Roberts. 2014. "Spacewalk: Movement and interaction in
virtual space with commodity hardware." Pp. 1-7. In Proceedings of the 2014
Conference on Interactive Entertainment Association for Computing
Machinery.
Guna, J., Jakus, G., Pogačnik, M., Tomažič, S., & Sodnik, J. 2014. "An analysis of the precision and reliability of the leap motion sensor and its suitability for static and dynamic tracking." Pp. 3702-3720. Sensors 14(2).
Hall, Michael. 2011. Plants as persons: a philosophical botany. Albany: State
University of New York Press.
Hansen, Mark B. 2012. Bodies in code: Interfaces with digital media. New York: Routledge.
Haraway, Donna J. 1997. Modest_Witness@ Second_Millennium .FemaleMan
_Meets_OncoMouse: Feminism and Technoscience. New York: Routledge.
Haraway, Donna J. 2000. How like a leaf: an interview with Thyrza Nichols
Goodeve. New York: Routledge.
Haraway, Donna J. 2003. The companion species manifesto: dogs, people, and
significant otherness. Chicago, Ill. : Prickly Paradigm.
Haraway, Donna J. 2007. When Species Meet. Minneapolis: University of Minnesota
Press.
Haraway, Donna. 2015. "Anthropocene, Capitalocene, Chthulhocene. Donna Haraway in
Conversation with Martha Kenney." Pp.255-270. In Art in the Anthropocene,
edited by Heather Davis and Etienne Turpin. London, U.K.: Open Humanities
Press.

Harma, Aki, Julia Jakka, Miikka Tikander, Matti Karjalainen, Tapio Lokki, and Heli
Nironen. 2003. "Techniques and applications of wearable augmented reality
audio." In Audio Engineering Society Convention Proceedings .114-119.
Hayles, N. Katherine. 1999. How we became posthuman: virtual bodies in cybernetics,
literature, and informatics. Chicago: University of Chicago Press.
Henchoz, Nicholas, Vincent Lepetit, Pascal Fua, John Miles. 2011. “Turning
Augmented Reality into a media: Design exploration to build a dedicated
visual language.” Pp 83-89. In Proceedings of the International Symposium
on Mixed and Augmented Reality. New York; IEEE Press.
Hollerer, Tobias, Dieter Schmalstieg and Mark Billinghurst. 2009. “AR 2.0: Social
Augmented Reality - social computing meets Augmented Reality”. 8th IEEE
International Symposium on Mixed and Augmented Reality. doi: 10.1109/ismar.2009.5336443. New York: IEEE Press.
Hookway, Branden. 2014. Interface. Cambridge, MA: the MIT Press.
Housefield, John. 2007. "Sites of time: organic and geologic time in the art of Robert Smithson and Roxy Paine." Pp. 537-561. Cultural Geographies 14(4).
Huang, Weidong, Leila Alem, and Mark Livingston eds. 2012. Human factors in
augmented reality environments. Springer Science & Business Media.
Huhtamo, Erkki. 2004. "Trouble at the interface, or the identity crisis of interactive
art." Retrieved 9 October 2016.
http://mediaartscultures.eu/jspui/bitstream/10002/299/1/Huhtamo.pdf
Irigaray, Luce and Michael Marder. 2014. "Without clean air, we have nothing".
Retrieved 6 March 2015.
https://www.theguardian.com/commentisfree/2014/mar/17/clean-air-
paris-pollution-crime-against-humanity.
Johnson, C.G. 2003. “Towards a prehistory of evolutionary and adaptive
computation in music”. In Workshops on Applications of Evolutionary
Computation pp. 502-509. Springer: Berlin, Heidelberg.
Johnston, John. 2008. The Allure of Machinic Life: Cybernetics, Artificial Life, and
the new AI. Cambridge, MA: The MIT Press.
Jones, Mark. 1995. "Char Davies: VR through Osmosis". Pp. 24-28. In CyberStage,
Vol. 2 (1) (Fall 1995).

Juhász, L. and Hochmair, H.H. 2017. "Where to catch 'em all? A geographic analysis of Pokémon Go locations." Pp. 241-251. In Geo-spatial Information Science 20(3).
Kaiser, Phillip & Miwon Kwon. 2012. Ends of the Earth: Land Art to 1974. Prestel
Publishing.
Kent, James 2012. The Augmented Reality Handbook-Everything you need to know
about Augmented Reality. Emereo Publishing.
Klemmer, Scott, Hartmann, B. and Leila Takayama. 2006. "How bodies matter: five
themes for interaction design.” Pp. 140-149. In Proceedings of the 6th
Conference on Designing Interactive systems. Association for Computing
Machinery.
Kwon, Miwon. 2004. One place after another: Site-specific art and locational
identity. Cambridge, MA: the MIT Press.
Lambert-Beatty, Carrie. 2008. Being watched: Yvonne Rainer and the 1960s.
Cambridge, MA: the MIT Press.
Leap Motion SDK. 2010-present. https://www.leapmotion.com/ Retrieved March 1,
2015.
Levin, Golan, Chris Sugrue and Kyle McDonald. 2014. Augmented Hand Series.
Cinekid Festival, Amsterdam. Retrieved 18 November 2016.
http://www.flong.com/projects/augmented-hand-series/
Lévy, Pierre. 2001. Cyberculture. Minneapolis: University of Minnesota Press.
Lichty, Patrick. 2014. "The Aesthetics of Liminality: Augmentation as an Art Form."
Pp. 99-125. In Augmented Reality Art, edited by Vladimir Geroimenko.
Switzerland: Springer International Publishing.

Lozano-Hemmer, Rafael. 2002. Relational Architecture 6: Body Movies. Mixed media site specific artwork. Presented at the Ars Electronica Festival at the OK
Centrum (Linz, Austria) in 2002.
Lozano-Hemmer, Rafael and David Hill. 2007. Under Scan. Mixed media site
specific artwork. Funded by the East Midlands Development Agency.
Lozano-Hemmer, Rafael. 2010. SandBox. Mixed media site specific artwork. Created
for Glow, Santa Monica Beach, Santa Monica, United States, 2010. Retrieved
18 May 2016. http://www.lozano-hemmer.com/sandbox.php.
MacKenzie, Adrian. 2002. Transductions: bodies and machines at speed. London,
U.K: Continuum.
MacKenzie, Adrian. 2006. Cutting Code: Software and Sociality. New York: Peter
Lang.
Malone, Nicholas and Kathryn Ovenden. 2017. "Natureculture." In The International
Encyclopedia of Primatology, edited by Agustín Fuentes. John Wiley & Sons,
Inc.
Manning, Erin. 2009. Relationscapes: Movement, Art, Philosophy. Cambridge, MA:
the MIT Press.
Manning, Erin. 2013. Always more than one: Individuation’s dance. Durham, NC:
Duke University Press.
Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: the MIT Press.
Manovich, Lev. 2006. “The poetics of augmented space.” Pp. 219-240. In Visual
Communication (5:2). Sage Publications.
Manovich, Lev. 2013. Software takes command. London: A & C Black.
Marder, Michael. 2013. Plant-thinking: A philosophy of vegetal life.
New York: Columbia University Press.
Mariette, Nicholas. 2013. “Human factors research in audio augmented reality.” Pp.
11-32. In Human Factors in Augmented Reality Environments, edited by
Weidong Huang, Leila Alem, and Mark A. Livingston. New York, NY:
Springer International Publishing.
Martinez, M., Sitawarin, C., Finch, K., Meincke, L., Yablonski, A. & Kornhauser, A.
2017. “Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep
Learning in Self Driving Cars”. arXiv preprint arXiv:1712.01397.
Masaoka, Miya. 2000-2012. Pieces for Plants. Mixed media performance. First
performed at the Chapel of the Chimes in Oakland, California in 2001.
Retrieved 19 July 2018 http://miyamasaoka.com/work/2006/pieces-for-
plants-gallery-installation/
Massumi, Brian. 2011. Semblance and event: Activist philosophy and the occurrent
arts. Cambridge, MA: The MIT Press.
McCormick, John and Adam Nash. 2011. Reproduction. Mixed reality artwork.
MacIntyre, Blair, Jay David Bolter, Emmanuel Moreno, and Brendan Hannigan. 2001.
“Augmented reality as a new media experience.” Pp. 197-206. In Proceedings
of the IEEE and ACM International Symposium on Augmented Reality.
Meyer, David. 2016. “Pokémon GO Maker Is Facing a Privacy Lawsuit Threat in
Germany.” Fortune Magazine. Retrieved 3 November 2016.
http://fortune.com/2016/07/20/pokemon-go-germany-privacy/
Milgram, Paul and Fumio Kishino. 1994. "A taxonomy of mixed reality visual
displays." Pp. 1321-1329. In IEICE Transactions on Information and
Systems (77:12).
Milgram, Paul, Haruo Takemura, Akira Utsumi, and Fumio Kishino. 1995.
“Augmented reality: A class of displays on the reality-virtuality continuum.”
Pp. 282-293. In Telemanipulator and Telepresence Technologies (2351).
International Society for Optics and Photonics.
Milgram, Paul. 2006. "Some human factors considerations for designing mixed
reality interfaces." Retrieved 3 April 2014.
http://www.dtic.mil/docs/citations/ADA473283
Mignonneau, Laurent, Christa Sommerer and Lakhmi Jain, eds. 2008. The Art and
Science of Interface and Interaction Design vol. 1. Switzerland: Springer
International Publishing.
Morey, Sean and John Tinnell, eds. 2017. Augmented reality: innovative
perspectives across art, industry, and academia. Parlor Press.
Morgan, Jessica. 2009. “Somebody to talk to: John Baldessari.” Tate Etc. issue 17:
Autumn 2009. Retrieved 4 April 2017. http://www.tate.org.uk/context-
comment/articles/somebody-talk
Mueller, Florian, and Matthew Karau. 2002. "Transparent hearing." Pp. 730-731, in
CHI'02 Extended Abstracts on Human Factors in Computing Systems.
Association for Computing Machinery.
Munster, Anna. 2006. Materializing new media: Embodiment in information
aesthetics. New Hampshire: Dartmouth College Press.
Munster, Anna. 2013. An Aesthesia of Networks: Conjunctive Experience in Art and
Technology. Cambridge, MA: The MIT Press.
Murphie, Andrew. 2002. “Putting the Virtual Back into VR”. Pp. 188-214. In a Shock
to thought: Expression after Deleuze and Guattari, edited by Brian Massumi.
London, U.K: Routledge.
Niehorster, Diederick C., Li Li, and Markus Lappe. 2017. "The accuracy and precision
of position and orientation tracking in the HTC Vive virtual reality system for
scientific research." i-Perception (8:3). Retrieved 5 June 2018.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5439658/
Nash, Adam and Stefan Greuter. 2015. “Out of Space.” Exhibited in Everything is
Data, 14 August – 26 September 2015, NTU ADM Gallery 2, Singapore.
Nash, Adam. 2016. "Out of Space". In Proceedings of the 22nd International
Symposium on Electronic Art. School of Creative Media, City University Hong
Kong. Retrieved 14 June 2017. http://www.isea-
archives.org/docs/2016/proceedings/ISEA2016_proceedings.pdf
Nash, Adam and Laurene Vaughan. 2017. "Documenting digital performance
artworks." Pp. 149-160. In Documenting Performance: The Context and
Processes of Digital Curation and Archiving, edited by Toni Sant. London,
United Kingdom: Bloomsbury Publishing.
Ohta, Yuichi and Hideyuki Tamura, eds. 2014. Mixed reality: merging real and
virtual worlds. Switzerland: Springer International Publishing.
OpenEndedGroup (Paul Kaiser, Shelley Eshkar, Merce Cunningham, Marc Downie).
2001. Loops, commissioned by MIT Media Lab. Mixed media artwork with
various iterations up until 2011. New VR iteration forthcoming. Retrieved 12
March 2015. http://openendedgroup.com.
Osthoff, Simone. 2009. "Invisible in plain sight, and as alive as you and I: An
Interview with Eduardo Kac". Flusser Studies, 8. Retrieved 8 September 2017.
http://www.flusserstudies.net/node/222
Pawade, D., Sakhapara, A., Mundhe, M., Kamath, A. and Dave, D. 2018. “Augmented
Reality Based Campus Guide Application Using Feature Points Object
Detection.” International Journal of Information Technology and Computer
Science (IJITCS) (10:5). MECS Publishing.
Peddie, John. 2017. Augmented Reality: where we will all live. Switzerland:
Springer International Publishing.
Penny, Simon. 2009. “Desire for virtual space: the technological imaginary in 90s
media art”. Retrieved 7 July 2015. http://simonpenny.net/2000Writings/
Penny, Simon. 2017. Making sense: cognition, computing, art, and embodiment.
Cambridge, MA: The MIT Press.
Perry, Simon. 2008. "Wikitude: Android App With Augmented Reality: Mind
Blowing." Retrieved 3 March 2014. http://digital-
lifestyles.info/2008/10/23/wikitude-android-app-with-augmented-reality-
mind-blowing/
Plumwood, Val. 2002. Environmental Culture: The Ecological Crisis of Reason.
New York: Routledge.
Pohatu, Taina W. 2011. “Mauri: Rethinking Human Well-being.” In MAI Review
(2011:3).
Portanova, Stamatia. 2013. Moving without a body: Digital philosophy and
choreographic thought. Cambridge, MA: The MIT Press.
Rainer, Yvonne. 1966. Hand Movie. 8mm black and white film. Retrieved 20
December 2016. https://coub.com/view/80y37.
Rainer, Yvonne. 1974. Work 1961-73. Halifax, N.S.: Press of the Nova Scotia College
of Art and Design.
Rajkai, K., Végh, K.R. and Nacsa, T. 2005. “Electrical capacitance of roots in relation
to plant electrodes, measuring frequency and root media.” Pp. 197-210. In
Acta Agronomica Hungarica (53:2).
Reblitz, Arthur A. 1976. Piano servicing, tuning, & rebuilding: For the professional,
the student, the hobbyist. Vestal Press.
Reed, A.H. 1963. Treasury of Māori folklore. Wellington, New Zealand: A.H. & A.W.
Reed.
Rheingold, Howard. 1991. Virtual Reality: Exploring the Brave New Technologies of
Artificial Experience and Interactive Worlds – From Cyberspace to
Teledildonics. London, England: Secker & Warburg.
Riley, Mathew and Troy Innocent. 2014. "The augmented bush walk: adaptation in
crossmedia ecologies." Pp. 234-247. In Proceedings of xCoAx 2014, Portugal:
Universidade do Porto.
Riley, Mathew and Adam Nash. 2014. "Contemplative interaction in mixed reality
artworks." Pp. 260-266. In Proceedings of the 20th International Symposium
on Electronic Art, Dubai, United Arab Emirates, 30 October - 8 November
2014. Retrieved 8 May 2016.
Roa-Rodríguez, C. and Thom van Dooren. 2008. “Shifting Common Spaces of Plant
Genetic Resources in the International Regulation of Property.” Pp. 176-202.
In The Journal of World Intellectual Property (11:3).
Robertson, G., Czerwinski, M. and Van Dantzich, M. 1997. "Immersion in desktop
virtual reality." Pp. 11-19. In Proceedings of the 10th Annual ACM Symposium
on User Interface Software and Technology. Association for Computing
Machinery.
Rousseau, J. 2016. "Mixed Reality Without Rose Colored Glasses." Retrieved from
https://www.artefactgroup.com/articles/mixed-reality-without-rose-colored-
glasses/
Royal, Charles, ed. 2003. The woven universe: the writings of Maori Marsden.
Masterton: Mauriora-ki-te-Ao/Living Universe Ltd.
Ryan, John C. 2015. “Plant-Art: The Virtual and the Vegetal in Contemporary
Performance and Installation Art.” Pp. 40-57. In Resilience: A Journal of the
Environmental Humanities, Vol. 2, No. 3. Nebraska: University of Nebraska
Press.
Scherrer, Camille. 2008. Le Monde des Montagnes. Mixed media artwork. Retrieved
from https://www.youtube.com/watch?v=kosAQpyxZAQ
Sheller, Mimi and Hana Iverson, eds. 2015. “Editorial.” L.A. Re.Play: Mobile
Network Culture in Placemaking, Leonardo Electronic Almanac (21:1).
ISSN 1071-4391.
Skwarek, Mark. 2014. "Augmented Reality Activism." Pp. 3-29. In Augmented Reality
Art, edited by Vladimir Geroimenko. Switzerland: Springer International
Publishing.
Slater, M. 2018. "Immersion and the illusion of presence in virtual reality." British
Journal of Psychology.
Sommerer, Christa and Laurent Mignonneau. 1992. Interactive Plant Growing, an
interactive computer installation, Aktuelle Kunst aus Österreich, Vienna,
Austria. In permanent collection of the ZKM Media Museum, Karlsruhe.
Stadon, Julian. 2009. "Project SLARiPS: An investigation of mediated mixed
reality." Pp. 43-48. In Mixed and Augmented Reality - Arts, Media and
Humanities (ISMAR-AMH 2009). New York: IEEE Press.
Stadon, Julian. 2015. "Hybrid Ontologies: An Attempt to Define Networked Mixed
Reality Art." In Proceedings of the 21st International Symposium on
Electronic Art ISEA2015. ISSN: 245-8611.
Steinicke, Frank and Gerd Bruder. 2014. "A self-experimentation report about long-
term use of fully-immersive technology." Pp. 66-69. In Proceedings of the 2nd
ACM symposium on Spatial user interaction. Association for Computing
Machinery.
Sutherland, Ivan E. 1968. "A head-mounted three dimensional display." Pp. 757-764.
In Proceedings of the December 9-11, 1968, fall joint computer conference.
part I. Association for Computing Machinery.
Schwartz, Mark Donald, ed. 2003. Phenology: an integrative environmental
science. Switzerland: Springer International Publishing.
Thiel, Tamiko. 2011. “Cyber Animism and Augmented Dreams.” In Leonardo
Electronic Almanac. Retrieved 7 September 2014. http://www.leoalmanac.org/wp-
content/uploads/2011/04/LEA_Cyber-Animism_TamikoThiel.pdf.
Thiel, Tamiko and Will Pappenheimer. 2013-ongoing. Biomer Skelters. Mixed media
artwork. First staged at FACT Gallery Liverpool. Subsequent major iterations
at ISEA2014 Dubai and Virtuale Festival Switzerland. Retrieved 18 April
2014. http://www.biomerskelters.com/
Thiel, Tamiko and Will Pappenheimer. 2016. “Assemblage and Décollage in Virtual
Public Space.” Media-N: the Journal of the New Media Caucus, CAA
Conference edition 2016. ISSN: 1942-017X.
Thomsen, Bodil M.S. 2011. “The Haptic Interface: On Signal Transmissions and
Events.” In Interface Criticism: Aesthetics Beyond the Buttons, edited by
Christian Ulrik Andersen and Søren Bro Pold. Aarhus University Press.
Thomsen, Bodil M.S. 2012. “Signaletic, haptic and real-time material.” Pp. 1-10.
In Journal of Aesthetics & Culture (4:1).
Trewavas, Anthony. 2005. “Green plants as intelligent organisms.” Pp. 413-419. In
Trends in Plant Science (10:9). Retrieved 7 May 2017. doi:
10.1016/j.tplants.2005.07.005.
Ulmer, Gregory L., and John Craig Freeman. 2014. "Beyond the virtual public
square: Ubiquitous computing and the new politics of well-being." Pp. 61-79.
In Augmented reality art, edited by Vladimir Geroimenko. Switzerland:
Springer International Publishing.
Unity SDK. Retrieved March 3, 2014. https://unity3d.com/get-unity/download.
Van der Tuin, Iris and Rick Dolphijn. 2012. New materialism: Interviews &
cartographies. Open Humanities Press.
Van Krevelen, D.W.F. and Poelman, R. 2010. “A survey of augmented reality
technologies, applications and limitations.” International Journal of Virtual
Reality (9:2).
Vieira, Patricia, Monica Gagliano, and John Ryan. 2016. The green thread:
dialogues with the vegetal world. Lanham: Lexington Books.
Vincs, Kim, Alison Bennett, John McCormick, Jordan Beth Vincent, and Stephanie
Hutchison. 2014. "Skin to skin: Performing augmented reality." Pp. 161-174.
In Augmented Reality Art, edited by Vladimir Geroimenko. Switzerland:
Springer International Publishing.
Vincs, Kim. 2016. “Virtualizing Dance”. Pp. 263–82. In The Oxford handbook of
Screendance studies, edited by Douglas Rosenberg. New York: Oxford
University Press.
Weibel, Peter. 2001. Olafur Eliasson: Surroundings surrounded: essays on space
and science. Cambridge, MA: The MIT Press.
Weinzierl, Stefan and Steffen Lepa. 2017. "On the Epistemic Potential of Virtual
Realities for the Historical Sciences. A Methodological Framework." Pp. 61-82.
In Augmented Reality: Reflections on Its Contribution to Knowledge
Formation, edited by José Maria Ariso. Berlin: De Gruyter.
Weichert, F., Bachmann, D., Rudak, B. and Fisseler, D. 2013. "Analysis of the accuracy
and robustness of the leap motion controller." Pp. 6380-6393. In Sensors (13:5).
Whitelaw, Mitchell. 2004. Metacreation: art and artificial life. Cambridge, MA: The
MIT Press.
Whitelaw, Mitchell. 2012. "Transmateriality: Presence Aesthetics and the Media
Arts." Pp. 223-236. In Throughout: Art and Culture Emerging With
Ubiquitous Computing, edited by Ulrik Ekman. Cambridge, MA: The MIT Press.
Winograd, Terry and Fernando Flores. 1986. Understanding computers and cognition:
A new foundation for design. Bristol: Intellect Books.
Witzgall, Susanne. 2016. “Overlapping Waves and New Knowledge: Difference,
Diffraction, and the Dialog between Art and Science." Pp. 141-152. In
Recomposing Art and Science: Artists-in-Labs, edited by Jill Scott and Irène
Hediger. Walter de Gruyter GmbH & Co KG.
Woodward, Susan L. 2009. Introduction to Biomes. Santa Barbara, CA: Greenwood
Press.
Wright, Rewa. 2013. “Exploring the responsive site: Ko maungawhau ki runga.” In
Proceedings of the 19th International Symposium on Electronic Art,
ISEA2013, edited by K. Cleland, L. Fisher and R. Harley. Sydney, Australia.
Retrieved from http://hdl.handle.net/2123/9700
Wright, Rewa. 2014. “From the bleeding edge of the network: Augmented reality and
the software assemblage.” Pp. 185-193. In Post Screen: Device, Medium and
Concept, edited by Helena Ferreira and Ana Vicente. Lisbon, Portugal:
CIEBA-FBAUL.
Wright, Rewa. 2015. “Mobile augmented reality art and the politics of re-assembly.”
Proceedings of the 21st International Symposium on Electronic Art.
Vancouver, B.C: ISEA International.
Wright, Rewa. 2016. Tactile Light. Mixed media software assemblage.
https://www.youtube.com/watch?v=KRd2kBTRkYA. See Appendix 1 for
video documentation available on accompanying USB key.
Wright, Rewa. 2016a. “Augmented reality as experimental art practice: from
information overlay to software assemblage.” Proceedings of the 22nd
International Symposium on Electronic Art. Hong Kong, China: ISEA
International.
Wright, Rewa. 2016b. “Augmented Virtuality: Remixing the Human-Art-Machine.”
Pp. 158-166. In Post Screen: Intermittence + Interference, edited by Helena
Ferreira and Ana Vicente. Lisbon: Edições Universitárias Lusófonas.
Wright, Rewa. 2017c. Tactile Sound. Mixed media software assemblage.
https://www.youtube.com/watch?v=alxwMb4KQSQ. See Appendix 1 for video
documentation available on accompanying USB key.
Wright, Rewa. 2017d. the Wild Versions (1-4). Mixed media software assemblage.
Performance recorded at A.H. Reed Park, Whangarei, Aotearoa-New
Zealand.
https://www.youtube.com/watch?v=nGT0yulyXpk.
See Appendix 1 for video documentation available on accompanying USB
key.
Wright, Rewa. 2018a. “Post-human Narrativity and Expressive Sites: Mobile ARt as
Software Assemblage.” Pp. 357-369. In Augmented Reality Art, edited by
Vladimir Geroimenko. Switzerland: Springer International Publishing.
Wright, Rewa. 2018b. "Interface Is the Place: Augmented Reality and the
Phenomena of Smartphone–Spacetime". Pp. 117-125. In Mobile Story Making
in an Age of Smartphones, edited by Max Schleser and Marsha Berry.
Switzerland: Palgrave Pivot.
Wright, Rewa. 2018c. Tactile Signal: Agave Relay. Mixed media software
assemblage. Performance recorded at the Black Box, UNSW Art & Design,
Sydney, May 2018.
https://www.youtube.com/watch?v=piIenCBZzGU.
See Appendix 1 for video documentation available on accompanying USB key.
Wright, Rewa. 2018d. Contact Zone. Mixed media software assemblage. Exhibition
dates are 19-23 November 2018, at the Black Box, University of New South
Wales, Faculty of Art and Design, Greens Road, Paddington, Sydney. Video
documentation of the rehearsal for this exhibition:
https://youtu.be/7OvRrFnxUes
Zhang, M., Zhang, Z., Chang, Y., Aziz, E.S., Esche, S. and Chassapis, C. 2018.
"Recent Developments in Game-Based Virtual Reality Educational
Laboratories Using the Microsoft Kinect." Pp. 138-159. In International
Journal of Emerging Technologies in Learning (iJET) (13:1).
Zweifel, R. and Zeugin, F. 2008. "Ultrasonic acoustic emissions in drought-stressed
trees – more than signals from cavitation?" Pp. 1070-1079. In New
Phytologist (179). Retrieved 4 June 2017.
https://www.ncbi.nlm.nih.gov/pubmed/18540974

Editorial note: This dissertation has been referenced using the American Sociological
Association Style (ASA).
Appendix 1

Supplementary material: video documentation of software assemblages

Accompanying this research document is a USB key containing video
documentation of the following performances, itemised in folders as numbered
below. Additionally, if online viewing is preferred, video is available at the URL listed
beside each entry:

Folder 1. Wright, Rewa. 2016. Tactile Light. Mixed media software assemblage.
Performance recorded at the artist's studio.
https://www.youtube.com/watch?v=KRd2kBTRkYA.

Folder 2. Wright, Rewa. 2017c. Tactile Sound. Mixed media software assemblage.
Performance recorded at the artist's studio.
https://www.youtube.com/watch?v=alxwMb4KQSQ

Folder 3. Wright, Rewa. 2017d. the Wild Versions (1-4). Mixed media software
assemblage. Performance recorded at A.H. Reed Park, Whangarei, Aotearoa-
New Zealand.
https://www.youtube.com/watch?v=nGT0yulyXpk.

Folder 4. Wright, Rewa. 2018c. Tactile Signal: Agave Relay. Mixed media software
assemblage. Performance recorded at the Black Box, UNSW Art & Design,
Sydney, May 2018.
https://www.youtube.com/watch?v=piIenCBZzGU.

Folder 5. Wright, Rewa. 2018d. Contact Zone. Mixed media software assemblage.
Exhibition dates are 19-23 November 2018, at the Black Box, University of New
South Wales, Faculty of Art and Design, Greens Road, Paddington, Sydney.
Video documentation of the rehearsal for this exhibition:
https://youtu.be/7OvRrFnxUes