

Lark Spartin

Prof. Aleksandra Dulic

MDST 330: Immersive Installation

5 April 2020

Gestures: A Computational Drawing Machine

Gestures is an immersive installation combining the Processing coding platform with the Microsoft Xbox Kinect to track human movement, resulting in a computational drawing machine. The project was inspired by a desire to explore movement as an art form, where the human body functions as an expressive brush in digital form. Gestures explores the intersections between the

physical and virtual body, with influences from live performance and early abstract animation,

attempting to bring the spectator into the artwork as a performer and creator. In “Delusions of Dialogue: Control and Choice in Interactive Art”, Jim Campbell states that because the viewer’s side of the interaction is not written, the viewer can “respond in a more open way” (133). This shift toward treating the computer not just as a recall system where a user enters commands, but as a collaborator in the design process, helped shape Gestures, which began by taking inspiration from physical media like theatre, drawing and painting.

Live performance and interactive media are both ephemeral in nature and exist mostly in

memory. David Saltz, in his paper “The Art of Interaction: Interactivity, Performativity, and Computers”, draws a connection between musicians, playwrights, and digital media artists, stating

“they do not directly produce tangible stimuli that any given audience will experience, but instead

produce what might loosely be described as a blueprint for performances” (118). Interactive media

can produce a unique experience, like a specific performance of a play or musical composition.

However, instead of a human performer interpreting the work, the viewer controls the performance directly: they gain the improvisation that comes with live performance, but also a deeper connection, because they are creating the piece rather than watching it on a stage. Gestures is similarly performative in the sense that participation and interaction become

what the piece consists of. In Studio Azzurro’s manifesto, “Confidential Report on an Interactive Experience”, Paolo Rosa states that viewers do more than participate in a predefined narrative; they can “explore and realize in time, in space, in matter, the potential of a work/event”

(6). In “Live Media: Interactive Technology and Theatre”, David Saltz describes the character of Ariel in a contemporary production of William Shakespeare’s The Tempest, in which the actress had sensors attached to her body, including on her head, waist and arms. These sensors transmitted information about her movements to a computer, which drove projected animations of the character. Voice recognition software was also used to sync the animation’s lips to the actress’s voice. Other

examples of technology integrated with theatre include Nayatt School by The Wooster Group in

1978, which incorporated film into a live performance and played back a previous version of the

work, and To You, The Birdie! (Phedre), a stage play featuring Willem Dafoe as Theseus, who appears live on stage while a pre-recorded video of him plays simultaneously on a monitor. This splitting between stage and screen, and this play with the idea of being physically present somewhere, helped me build a foundation for working with a physical and virtual version

of the body. Learning about how performance intersects with interactive media and understanding

how a physical body could move within the space was an important precursor for the abstracted

virtual display of movement in Gestures.

After researching the similarities between performance and interactive art, I became

interested in the variety of ways that performance could be depicted. It was clear to me that

computers have the potential to do more than regurgitate raw data and can interpret input of many

different data types, so the next logical step was to learn how to display that input. A dichotomy exists between the language of expressive, gestural painting and drawing and the world of computational technology and discrete logic. Digital media is usually not thought to be as

expressive as physical media. However, Scott Snibbe and Golan Levin, in their article “Interactive Dynamic Abstraction”, contemplate the possible limits of physical media, stating there “is one essential

element missing from these static paintings that is intrinsic to music: time” (2). Although I was not

necessarily exploring links between visual art and music, I noted that interactive technology is also

time-based media. Gestures was an important experiment to see if a greater sense of intrigue could

be expressed by displaying human movement over time. Furthermore, I wanted to emulate the

expressivity of physical media in the visual display on screen. Snibbe and Levin also speak on this

point of infusing computer programs with temporal painting expressivity, saying they wanted to

“design visual instruments that could possess these qualities of simplicity, intuitiveness and

possibility” (6). They began by exploring pioneers like Pythagoras and Kandinsky. Pythagoras

looked at similarities of tone and light, and explored ‘perceptual phenomena’ and aesthetics.

Kandinsky used his synesthesia to explore links between sound and color. Snibbe and Levin had

a large focus on the basic principles of drawing, quoting Paul Klee’s ideas of ‘a line is a dot that

went for a walk’, and ‘drawing is taking a line for a walk.’ Abstracting these primitive shapes like

points, lines, ellipses and rectangles was a good first step in building Gestures.
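
In that spirit, a tiny Processing sketch of my own (an illustration, not code from Gestures) literalizes Klee’s idea by extending a line each frame from its previous point toward the mouse:

    // 'Taking a line for a walk': each frame extends the line
    // from its last point to wherever the mouse has wandered.
    float px, py;   // the line's previous point

    void setup() {
      size(640, 480);
      background(255);
      px = width / 2;
      py = height / 2;
    }

    void draw() {
      stroke(0);
      line(px, py, mouseX, mouseY);   // the dot goes for a walk
      px = mouseX;
      py = mouseY;
    }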

Motion Sketch, a digital animation that uses mouse and keyboard movements to display shapes and color on screen,

and Motion Phone, a program that allowed many machines to collaborate over a network connection to draw on one canvas, were early experiments by Snibbe and Levin. The authors did concede, however, that at the time “most computer programs were focused on the mathematical processes rather than the aesthetic” (4), and they attempted to fix this in numerous ways. They used

pre-Cartesian geometry, applying direction, inertia, velocity, orientation and curvature, and coded particle systems to emulate nature in computer graphics. They would also utilize processes like

taking the numerical derivative of successive screen positions, and used squash and stretch

animation techniques to distort shapes in the direction of motion. Using a program as an instrument

to inform movement, rather than just a machine to carry out simple tasks, helped me to view

computation in a new way. Snibbe and Levin finally took the approach of creating

“phenomenological user interfaces” (9). Using non-representational forms, they attempted to

showcase the subjective nature of experience, the knowing of oneself, and how perception is linked

to consciousness. These studies branch off of phenomenology, a field of study founded by Edmund Husserl.

Perception, like an individual’s style and personality, varies widely, and I was interested in how something as intangible as perception or emotion could be linked to interactive media, and how these feelings could potentially be showcased in Gestures.

Abstract animation is another early field that was important in my study. Len Lye drew

and scratched on celluloid film to create the 1958 animation Free Radicals, which imitated the

structure of music, making his process of creation as telling of his inner thoughts as the product itself (Snibbe 2). However, early abstract animations had no live performer, making diverse expression difficult. The color organ, a device that incorporated light with a live performer, was invented to address this. The first color organ, The Ocular Harpsichord, was made by Father Louis-Bertrand

Castel in 1734. A harpsichord was placed behind colored glass windows, using candles as its light

source. Pulleys from the keys of the keyboard created small panes of color as a musical

composition was played. In the 20th century, technology enhanced pure light performances, adding mirrors, lenses, and colored filters for projection. Thomas Wilfred made his version

of a color organ, Clavilux, in 1930, and Oskar Fischinger, one of the first to create films that

accompanied music, also made the Lumigraph in 1952, consisting of a cloth sheet that intersected

beams of light controlled by foot pedals. These color organs were functioning equipment that

carried out a specific task, all while maintaining an ethereal look. This pairing of technological capability with the potential to show character in an abstract output was exactly the challenge I wanted to explore in Gestures. I planned to provide a natural quality in the installation by

employing the math and physics available in computer code. I decided to use Xbox Kinect as my

computer vision input, and knew there were many platforms I could utilize in tandem to abstract

this data. Usually, digital representations leave little room for the intuitive abstraction I have

spoken about thus far, so it was important to find a coding platform that had a focus on using code

in a poetic fashion. Zachary Lieberman, a founder of the creative C++ library openFrameworks and a collaborator with Golan Levin, speaks about computational poetics in his talk “What Could the Creative Career of the Future Look Like?” at Adobe’s 99U Conference. Lieberman found his

beginnings in printmaking and painting, and soon after started to use code as a visual language to

evoke emotion and feeling, using code to write his version of poems. Processing was my platform

of choice because it was created to use code within the context of the visual arts, has very descriptive documentation, and uses Java syntax, making it easy to learn. The fact that coding languages have shifted from text-based encoding with no visual display, like TEI, to languages created for visual art, like Processing or Max/MSP, invites users to rethink the meaning of code.

Golan Levin’s article “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers” reflected on other projects that took a more technical approach to computer vision, such as Myron Krueger's Videoplace (1965-1975).

According to Levin, Krueger believed “that the entire human body ought to have a role in our

interactions with computers” (2). Videoplace digitized participants’ figures, analyzing their shape

and motions, and in turn, drew animations on screen related to their actions. Users could also

choose to paint with their fingers or bodies. Messa di Voce is another example, created by Levin

and Lieberman. It was a projection-based, augmented reality project combined with speech

analysis, where a vision algorithm tracks the performers’ heads, analyzes audio coming from the

microphone, and displays visualizations in reaction to what is being heard and seen.

Gestures also took influences from Camille Utterback’s piece Entangled, where she

projected a variety of human-controlled brush textures onto three suspended scrims. Participants

can interact with the work on both sides of the scrims. Utterback states that Entangled “reminds

us of our embodied relationships, requiring participants to literally view each other through a

screen” (1). Although Gestures is not double-sided, Entangled did provide ideas for turning computer input into something physical, visual and artistic, and for bringing viewers back into connection

with the digital medium: a medium that has been characterized as being somewhat disconnecting,

separating relationships and removing people from the environment around them. Gestures was

an experiment in allowing viewers to be present with themselves, using their own movement to

immediately transform a work right before their eyes.

Golan Levin’s elementary computer vision techniques became a good starting point for tangibly expressing Gestures. The three techniques I found most useful in creating my program were frame differencing, which checks the difference between two frames to detect change in movement; background subtraction, which isolates pixels according to their difference from a known background scene; and brightness thresholding, which uses differences in luminosity between foreground and background. From here, I began to explore

computer vision more specifically within the context of Processing documentation, starting with

simple object tracking, and moving into representing point clouds, raw depth data and the depth

image, color tracking, average point hand tracking, closest or highest point tracking, and

thresholds.
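
To make the first of these techniques concrete, a minimal frame-differencing sketch in Processing might look like the following. This is an illustrative sketch of my own rather than code from Gestures: it assumes the standard Processing video library and a connected camera, and the threshold value is an arbitrary placeholder.

    // Illustrative frame-differencing sketch (not from Gestures):
    // pixels that changed enough between two frames are treated as motion.
    import processing.video.*;

    Capture video;
    PImage prevFrame;       // the previous frame, kept for comparison
    float threshold = 50;   // placeholder: how different a pixel must be to count as motion

    void setup() {
      size(640, 480);
      video = new Capture(this, width, height);
      video.start();
      prevFrame = createImage(width, height, RGB);
    }

    void captureEvent(Capture video) {
      // save the outgoing frame before reading the new one
      prevFrame.copy(video, 0, 0, video.width, video.height,
                     0, 0, video.width, video.height);
      video.read();
    }

    void draw() {
      loadPixels();
      video.loadPixels();
      prevFrame.loadPixels();
      for (int i = 0; i < video.pixels.length; i++) {
        color current = video.pixels[i];
        color previous = prevFrame.pixels[i];
        // distance between the two colors in RGB space
        float diff = dist(red(current), green(current), blue(current),
                          red(previous), green(previous), blue(previous));
        // changed pixels are drawn white, forming a motion mask
        pixels[i] = (diff > threshold) ? color(255) : color(0);
      }
      updatePixels();
    }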

I started by displaying the raw depth data and testing it against a condition to see if it fell within a certain distance threshold. The threshold was used to distinguish human movement from background movement or noise: any motion inside it was picked up by the Kinect. Within that threshold, the program calculates and tracks the highest and lowest points, updating with every frame. Another conditional checked whether a pixel fell within a minimum threshold, in other words, whether it was recognized as the closest point to the Kinect. The qualifying pixels on screen are counted, along with running sums of their X-axis and Y-axis positions. The sums are then divided by the total in order to get an average center point that tracks the closest point on the screen.
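
A condensed sketch of this averaging step might look like the following, assuming Daniel Shiffman’s Open Kinect for Processing library (see his “Getting Started with Kinect and Processing” in the works cited). The minimum and maximum depth values here are placeholders rather than the thresholds used in the installation.

    // Condensed sketch of the average-point step, assuming Shiffman's
    // Open Kinect for Processing library; thresholds are placeholders.
    import org.openkinect.processing.*;

    Kinect kinect;
    int minDepth = 300;   // assumed near limit of the interaction zone (raw depth units)
    int maxDepth = 900;   // assumed far limit

    void setup() {
      size(640, 480);
      kinect = new Kinect(this);
      kinect.initDepth();
    }

    void draw() {
      background(0);
      image(kinect.getDepthImage(), 0, 0);
      int[] depth = kinect.getRawDepth();
      float sumX = 0, sumY = 0;
      int count = 0;
      for (int x = 0; x < kinect.width; x++) {
        for (int y = 0; y < kinect.height; y++) {
          int d = depth[x + y * kinect.width];
          // only pixels inside the distance threshold count as the performer
          if (d > minDepth && d < maxDepth) {
            sumX += x;
            sumY += y;
            count++;
          }
        }
      }
      if (count > 0) {
        // dividing the coordinate sums by the pixel count gives the tracked center
        fill(255, 0, 0);
        ellipse(sumX / count, sumY / count, 32, 32);
      }
    }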

From these points of movement, particle systems were displayed, each with different shape properties. The color of each particle was mapped to the color depth data: the color at that X/Y position was sampled and would change as the participant moved across the screen, depending on their distance. Four particle systems were displayed, each with different characteristics. I used object-oriented programming to create particle constructors, and used the principles of polymorphism and inheritance to extend different attributes to each constructor. Calling super() allowed me to inherit the main characteristics of the original Particle class I created and add unique features to each subsequent particle type, so I could draw different shapes on screen. This was more efficient, since I didn’t have to write each particle class from scratch. Every particle used the same ParticleSystem class functionality: the ParticleSystem stays the same for each particle type, keeps track of each particle’s lifespan, and removes it after a certain amount of time. Each particle is updated with an acceleration, velocity, and position to give the effect of streaming down the screen, playing off what I learned from Snibbe and Levin about emulating real forces in code to give it the feel of a temporal painting.
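
The inheritance pattern described here can be sketched as follows; the class names, shapes, and constants are illustrative stand-ins rather than the actual Gestures source, with the mouse standing in for the tracked point.

    // Illustrative particle classes (stand-ins for the Gestures source).
    class Particle {
      PVector position, velocity, acceleration;
      float lifespan = 255;

      Particle(float x, float y) {
        position = new PVector(x, y);
        velocity = new PVector(random(-1, 1), random(0, 2));
        acceleration = new PVector(0, 0.05);  // gentle pull downward, so particles stream down the screen
      }

      void update() {
        velocity.add(acceleration);   // real-force style motion, after Snibbe and Levin
        position.add(velocity);
        lifespan -= 2.0;              // fade out over time
      }

      boolean isDead() {
        return lifespan < 0;
      }

      void display() {
        noStroke();
        fill(255, lifespan);
        ellipse(position.x, position.y, 8, 8);
      }
    }

    // A subclass reuses the Particle constructor through super()
    // and overrides only the shape that gets drawn.
    class SquareParticle extends Particle {
      SquareParticle(float x, float y) {
        super(x, y);   // inherit position, velocity, and acceleration setup
      }

      void display() {
        noStroke();
        fill(255, lifespan);
        rectMode(CENTER);
        rect(position.x, position.y, 8, 8);
      }
    }

    // The system owns the particles and removes each one when its lifespan ends.
    class ParticleSystem {
      ArrayList<Particle> particles = new ArrayList<Particle>();

      void addParticle(float x, float y) {
        particles.add(new SquareParticle(x, y));
      }

      void run() {
        for (int i = particles.size() - 1; i >= 0; i--) {
          Particle p = particles.get(i);
          p.update();
          p.display();
          if (p.isDead()) particles.remove(i);
        }
      }
    }

    ParticleSystem system = new ParticleSystem();

    void setup() {
      size(640, 480);
    }

    void draw() {
      background(0);
      // in Gestures the (x, y) would come from the tracked point; the mouse stands in here
      system.addParticle(mouseX, mouseY);
      system.run();
    }

Because SquareParticle overrides only display(), every other behaviour, the physics update and the lifespan, is inherited unchanged from Particle.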

In another iteration of this project, I would be interested in exploring blob detection to obtain regions of motion, such as both hands, the head, and perhaps the middle of the torso. This would be a more seamless way to detect specific regions of activity and give better recognition of these areas, since code can be written to detect areas of a certain color or distance. I would have also liked to explore the skeleton video data to accomplish this and map the different particle shapes to specific joints. Reflecting on the installation process, testing equipment and experimenting with materials, both before setup and in real time, became essential, and I will carry these lessons into my next installations. A further iteration of Gestures could also include audio, creating an interactive visual music piece that uses movement to trigger sound in tandem with visuals.

The physical portion of Gestures was just as important in bringing together the concepts

of expression discussed so far. As humans, we are intrinsically linked to space, allowing it to

inform feeling and memory. I wanted to free the computer-generated image I created from the

constraints of a computer monitor. I was trying to create textural materiality in the code itself, so

the physical installation needed to serve as an extension of this. I used tulle to help create an

ephemeral, invisible effect, which helps to represent an intertwining of the physical and virtual

body and experience. Toni Dove, author of “Theater without Actors: Immersion and Response in Installation”, talks about scrims being used to break out of the stereotypical screen mould, saying

“projection-scrim sculpture creates the illusion of 3D characters from slides and video…projecting

moving animations onto structured scrims to create a holograph-like illusion” (4). Dove says the

viewer can become more of a participant this way, as compared to film, where a passive viewer

just looks at flat space. The tulle used in Gestures helps give the movement a reverse holographic

effect. Paolo Rosa, in Studio Azzurro’s “Confidential Report on an Interactive Experience”, agrees that using the right materials in an installation is important when he says that projecting images onto

heavily textured materials “exalts the materiality which is so characteristic of the art of painting;

the warmth of this medium and the pleasure associated with a technique that has played such a

large role in the history of art is in some way recovered” (8). Being able to emulate characteristics

of painting was important when looking at the structure of the ‘paintbrush’ in Gestures, but also,

the canvas. Rosa also meditated on the importance of a natural interface, noting that relationships between the computer and participant are closely tied to technological items such as the mouse and keyboard, and that it is better “if our sensitive environments are without even so much as the

shadow of an electric wire. This allows us to get a clearer view, not of technology, but of its effects,

and allows for a more effective relationship between the immaterial world of images and sounds

and the material world of the objects and spaces with which the work is completed” (7). Being

able to touch something physically and interact in three-dimensional space helps involve multiple

senses and brings technology into reality. In creating Gestures, I wanted to incorporate an element

of play, helping others to learn through an imaginary adventure, and encouraging freedom and

exploration by minimizing technology and wires, and maximizing being present with the work

(Rosa 9). Rosa explains that “we are at the end of the era initiated by Duchamp and his

readymade works”, and no longer will everyday objects be taken out of the real world and put in

art galleries, but actually the opposite is happening (11). Gestures brings the work out of the art gallery and towards the participant’s world.

Interactive media helps to build a symbolic system that encourages a new point of view from which to “observe reality…that [participants] have helped to change” (Rosa 11). Toni Dove

describes interactive media narrative as “not bound by the traditional ideas of plot or story, but

nonetheless has an engine that moves it through time” (2). Gestures is a multi-layered project, taking cues from live performance, expressive painting and abstract animation, which helped me see how perspective and human engagement can be used as an instrument. Its interactive medium attempted to infuse expression into a computational interface, to give its viewers the role of creators, and to allow them to inform design choices themselves while exploring and

creating in real time. This immersive installation functioned as a learning opportunity and

experiment, trying to create affect in its spectators and give them an experience that they could not

otherwise get through traditional art. The manifestation of code and physical installation in

Gestures is proof that interactive digital media can be immersive, and is an expressive realm that

should be explored for years to come.



Works Cited

Campbell, Jim. “Delusions of Dialogue: Control and Choice in Interactive Art.” Leonardo, vol.

33, no. 2, 2000, pp. 133–136. JSTOR, www.jstor.org/stable/1576847. Accessed 3 Apr.

2020.

Saltz, David Z. “The Art of Interaction: Interactivity, Performativity, and Computers.” The Journal

of Aesthetics and Art Criticism, vol. 55, no. 2, 1997, pp. 117–127. JSTOR,

www.jstor.org/stable/431258. Accessed 3 Apr. 2020.

Kaye, Nick. “Paolo Rosa (Studio Azzurro), Confidential Report on an Interactive Experience.” Routledge, 2007.

Saltz, David Z. “Live Media: Interactive Technology and Theatre.” Theatre Topics, vol. 11, no. 2,

2001, pp. 107-130.

The Wooster Group, thewoostergroup.org/blog/.

Snibbe, Scott Sona, and Golan Levin. “Interactive Dynamic Abstraction.” Proceedings of the First

International Symposium on Non-Photorealistic Animation and Rendering (NPAR ’00),

2000, doi:10.1145/340916.340919.

Behance, Inc. “Zach Lieberman: What Could the Creative Career of the Future Look Like?” Adobe

99U, 25 June 2019, 99u.adobe.com/videos/63720/zach-lieberman-what-could-the-

creative-career-of-the-future-look-like.

Levin, Golan. “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for

Novice Programmers.” AI & Society, vol. 20, no. 4, 2006, pp. 462-482.

“Entangled.” Camille Utterback, camilleutterback.com/projects/entangled/.



Dove, Toni. “Theater without Actors: Immersion and Response in Installation.” Leonardo, vol.

27, no. 4, 1994, pp. 281–287. JSTOR, www.jstor.org/stable/1575994. Accessed 3 Apr.

2020.

“Free Radicals, 1958 (Revised 1979).” The Len Lye Foundation - Free Radicals, 1958 (Revised

1979), www.lenlyefoundation.com/films/free-radicals/33/.

“Getting Started with Kinect and Processing.” Getting Started with Kinect and Processing |

Daniel Shiffman, shiffman.net/p5/kinect/.

Howell, Joyce Bernstein. “Element‐and‐Principles Instruction, Perceptual Drawing and Paul

Klee’s Pedagogical Sketchbook.” International Journal of Art & Design Education, vol.

39, no. 1, 2019, pp. 38–55., doi:10.1111/jade.12225.
