
Dance Jockey: Playing Music by Dancing at TEDxLuanda


Yago de Quay
August 25, 2012

Abstract
This paper reports on Dance Jockey's performance at the TEDxLuanda conference. It examines the methodology behind the software, hardware, music, and choreography; describes the social context that was crucial to the development of the hardware and software used in interactive music performances; and concludes with lessons learned from the performance.

Keywords: user innovation, interactive music, interactive dance, interactive projection, Kinect camera, motion capture

1 Introduction
Researchers and hobbyists alike are innovating in fields in which they might not be considered experts. The availability of affordable sensors, together with a willingness to share software, ideas, and methods, is fueling the development of home-brewed entertainment systems that empower us with new ways of interacting with people and things. Dance Jockey is an example of a system that enables music and visuals to be generated with body movements. Using cameras and sensors, the body of a dancer is turned into a living synthesizer. This paper starts by describing the context in which Dance Jockey was created. The works cited explain what factors contributed to the development of its software and hardware, as well as the motivations behind innovative works. Section 3 presents examples of artists and researchers who have taken it upon themselves to innovate in the field of music composition and performance. Section 4 provides insight into how movement was coupled to interactive media and into the sound design techniques. The last section reviews the experiences gathered from the rehearsals and the performance. In this paper, Dance Jockey refers to an interactive music/dance performance presented at TEDxLuanda on 26 May 2012. A video of the performance can be found here: http://youtu.be/uYjdLVAEQzY

2 User Innovation
End users may choose to innovate when a specific technology does not fit their needs. (Hippel 2005) cites a study illustrating the frequency with which user firms and individual consumers develop or modify products. Findings show that in outdoor consumer products, mountain biking equipment, and extreme sporting equipment, the percentages of users developing the product for their own use are 9.8%, 19.2%, and 37.8% respectively. He claims that innovation is being democratized and that those who don't innovate can benefit from innovations freely shared by others. Within these end users, (Franke, Hippel et al. 2006) identified a group they called lead users, who can help develop breakthrough products. Lead users are members of a user population who 1) expect to obtain high benefits from the development of a product, and 2) are ahead of the marketplace trend and thus anticipate the needs of many users. The latter characteristic also predicts both user innovation and the likely attractiveness of the innovation. (Shah and Tripsas 2007) argue that the commercialization of innovative products by individual users is more likely to happen in industries where use provides enjoyment instead of purely economic benefits, opportunity costs are relatively low, and there is a high variety of demand. The authors claim that prior employment or university ties can provide insight into the application of emerging technologies. Furthermore, users from outside a given field are often in a better position to find innovative solutions because they frame the problem differently.

(Jeppesen and Frederiksen 2006) confirm the previous findings in an online community revolving around Propellerhead's digital audio production software. In this forum, users could ask and answer questions, and test and redesign much of the software developed by the company. The key finding is that innovative users are likely to be hobbyists in the field in which they innovate. Hobbyists in this context are also more likely to share innovations because they are not in competition with other users. In fact, 56% of the software modifications were shared in the online community, and 33% of users reported building upon the work of other users. The authors conclude that these factors increase the quality of innovation: they make the product more attractive and ahead of the market trend.

In the field of electronic music, (Rowe 1996) explores the motivations behind the effort to transfer musical concepts to computer programs. He claims that computers open up possibilities for exploring the composition of timbres and rhythms, and the execution of meta-compositional methods for generating musical textures and structures. His interest in this field lies in making programs more responsive to musical input and better performers. (Winkler 1999) adds that the ability to have complete control over all aspects of musical composition was part of what attracted composers to computer music. He also points out that most of the early computer music innovation happened in research facilities like Columbia University, Stanford University, and Bell Labs. This suggests that close ties to research and development facilities may increase the likelihood of innovation in this field.

3 Previous Works
Researcher and performer Todd Winkler used David Rokeby's Very Nervous System, which detects the speed and location of dancers, to map positional data to instruments and large musical processes (Winkler 1997). Sections in the performance had one of four different types of movement: mechanical and repetitive for percussive instruments; slow and fluid for evolving sounds; practical and technical for live mixing; and theatrical for triggering comical samples. Mappings ranged from natural correlations between low-energy movements and sound to contradictory and unpredictable responses. The author described the sounds as mainly percussive, vocal, and mechanical.

The Palindrome dance company has been sonically and visually augmenting its performances using its own EyeCon motion tracking software (Wechsler 2011). Wechsler identifies two approaches to motion tracking: external devices like cameras, and body-worn sensors like electrodes. He argues that although motion tracking technology has the potential to increase interactivity in the performing arts, it is the techniques used for coupling captured data to media that will ultimately validate the interactivity.

The artist Imogen Heap, together with researcher Tom Mitchell, developed Data Gloves, a pair of gloves worn by the artist as an interface to a live musical composition system that is entirely controlled by gestures. The gloves track orientation, finger motion, and upper body motion. Examples of interaction include high-pass filters mapped to hand distance, playing air drums, and manipulating volume, reverb, and panning (Shepherd 2011). In another example using hand-held controllers, Onyx Ashanti improvises jazz music with a sensor that converts his finger, hand, and breath movements into control data (Ashanti 2012). The musical interaction with the sensor is similar to that with a saxophone. During a performance, a light show illustrates to the audience which part of the arrangement is being played.

(Guedes 2007) explores how similarities between musical rhythm and rhythm in dance can bring new possibilities to computer-mediated interactions between dancers and musicians. Using a set of Max/MSP algorithms called m-objects, his various installations and performances usually feature a dancer controlling the tempo of the music with his or her movements. Chris Vik developed and performs with software called Kinectar that uses data from Microsoft's Kinect camera (Microsoft 2011) to map body limbs to MIDI events (Vik 2012). The software is designed to facilitate melodic and harmonic interaction with digital audio workstations. In an example using biofeedback, Pamela Z developed a system allowing interplay between her voice and a gesture-based MIDI controller called BodySynth, which translates electrical signals generated by muscle contractions into MIDI commands (Zone 2011). The New York dance company Troika Ranch has been known to use a wireless suit of up to eight bend sensors to manipulate visual and sonic elements (Ranch 2011).

4 Methodology
Using cheap video game controllers, Dance Jockey extracts dance features like poses and gestures, which are interpreted and linked to sound-generating software. Contrary to most concerts featuring electronic music, there is no one working behind the computer. All communication between the performers and the computer is done through gestures, and most of the time the only feedback is sound.

4.1 Motion Capture


Originally, Dance Jockey was performed with the expensive and advanced Xsens MVN motion capture suit, normally used for animating computer-generated characters in films. However, due to the growth of free, open-source software and the availability of commercial motion capture devices, simpler and more flexible solutions were found. Because cheap motion capture software and hardware can be hacked easily, users have the freedom to customize them.

Two motion capture technologies were used in this performance: Microsoft's Kinect camera and Nintendo's Wii Remote. The first is a special camera with depth perception and limb tracking that comes with the Xbox 360 video game console. Software for connecting the Kinect to computers is shared freely by various developers, which contributes to its popularity among media artists. Recognizing these unexpected implementations, Microsoft released a video that became the most viewed video about the Kinect on youtube.com. Among the top hits is also a video showing the 12 best Kinect hacks. The second motion capture technology is a handheld remote controller for the Wii video game console. It can sense acceleration along three axes as well as infrared light. Again, one of the top 10 most viewed videos on the Technology, Entertainment and Design (TED) conference's youtube.com channel is one about Wii Remote hacks.

Even though the Kinect and Wii Remote were originally designed as controllers for video game consoles, software developed by hobbyists enables one to connect these controllers to computers. The Kinect plugs into the computer through the USB port. The application Synapse1 then grabs the data captured by the Kinect and sends it to any application that can receive the Open Sound Control protocol. The Wii Remote links to the computer over a Bluetooth wireless connection. The application Osculator2 receives data from the controller and converts it into various formats that can be used by other applications.

The Kinect was placed at the center of the stage, 3 meters away from the performer. This gave an area of up to 2 x 2 meters for the performer to move about without
1 http://synapsekinect.tumblr.com/, developed by Ryan Challinor
2 http://www.osculator.net/

getting out of the camera's field of vision. The Wii Remote was held by one of the performers during the talk and in the beginning of the concert.
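As a rough illustration of this data path, the sketch below shows how an application could listen for the joint data that Synapse streams over Open Sound Control. It uses Python and the python-osc library rather than Max/MSP; the port number and the address pattern are assumptions made for the example, not necessarily the exact values used in the performance.

# Minimal sketch: listening to Kinect joint data sent by Synapse over OSC.
# The port and address below are assumed for illustration; consult the
# Synapse documentation for the exact values it uses.
from pythonosc import dispatcher, osc_server

def on_left_hand(address, x, y, z):
    # Hypothetical handler: Synapse-style joint messages carry x/y/z floats.
    print(f"left hand position: x={x:.2f} y={y:.2f} z={z:.2f}")

disp = dispatcher.Dispatcher()
disp.map("/lefthand_pos_body", on_left_hand)  # assumed address pattern

# Synapse is assumed here to send OSC to localhost on port 12345.
server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 12345), disp)
server.serve_forever()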

4.2 Control Unit


The control unit was a hub where all the information from the sensors was aggregated, transformed into commands, and then transmitted to different applications. It was the only software developed from scratch, using the visual programming language Max/MSP. Its architecture was based on the finite-state machine model: it comprises a number of states, and each state contains instructions on how to map incoming motion capture data to audio commands, together with conditions for transferring to the next state. During the performance the control unit passes sequentially through a list of different states, one at a time. The order and mapping of the states are essentially the music composition.

Dance Jockey is interested in exploring the possible connections between movement and sound. Therefore the goal of the control unit was not to perfect one specific mapping but to quickly experiment with different mappings. To achieve this, each state, together with its mappings, was encapsulated in a module. Modules could be copied from one performance to another without losing any information. Since compatibility with different motion capture devices and software was important for the interaction, data was transferred in Open Sound Control format through the User Datagram Protocol (UDP).

Figure 1 shows the mapping inside the Luz module. Kinect data is filtered to get the left hand's height relative to the waist. Minimum/maximum bounds scale the input to a 0.0 - 1.0 range. When the hand gets below 0.9, an instruction to play a sound clip of a light bulb being switched on is sent to the music software. Immediately the control unit goes to the next scene, called pitch glitch (see Figure 2).

Figure 1 - Inside one of the scenes
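To make the finite-state machine idea more concrete, here is a minimal sketch in Python rather than Max/MSP. The Luz state follows the description above (left-hand height relative to the waist, scaled to 0.0-1.0, triggering the light-bulb sample below 0.9); the scaling bounds, the message address, and the send_osc helper are hypothetical.

def scale(value, lo, hi):
    """Scale a raw motion-capture value to a clamped 0.0-1.0 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

class State:
    def __init__(self, name, on_frame):
        self.name = name
        self.on_frame = on_frame  # called once per frame; returns True to advance

def luz(frame, send_osc):
    # Left-hand height relative to the waist, scaled to 0.0-1.0 (bounds assumed).
    height = scale(frame["lefthand_y"] - frame["waist_y"], -0.5, 0.5)
    if height < 0.9:
        send_osc("/clip/lightbulb/play", 1)  # trigger the light-bulb sample
        return True                          # transition condition met
    return False

def pitch_glitch(frame, send_osc):
    # Placeholder for the next scene described in the text.
    return False

states = [State("luz", luz), State("pitch glitch", pitch_glitch)]
current = 0

def handle_frame(frame, send_osc):
    """Feed one frame of motion capture data through the current state."""
    global current
    if states[current].on_frame(frame, send_osc):
        current = min(current + 1, len(states) - 1)  # advance sequentially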

4.3 Music
The performance consisted of a talk and a concert. During the talk, sound effects were triggered and manipulated with gestures captured by the Wii Remote or the Kinect. The sounds reinforced the actions happening on stage so the audience would understand that the lecturer's movements were causing the sound. The concert featured two songs. The first song started with the performers passing a sonic object to each other. Then the object was thrown to the crowd and started a beat. Any registered upper body movement performed by the dancer would unmute the beat and the visuals. All the mappings between sound, visuals, and movement were obvious to the observer.

After the dancer performed a handstand, the next song was introduced. The second song mixed live beatboxing by the musician with the synthesizer sounds produced by the dancer. The musician created a looping beat by stacking layers of vocal percussion and melodies. In the A section of the song the dancer locked into one of five positions, each of which triggered a unique sustained chord. The choreography and chord progression were composed simultaneously. Positions were recognized using the relative height, width, and depth distances from each hand to the chest. Chords were quantized to the beat to keep them on time. Furthermore, relative hand-to-chest height opened or closed a filter applied to the chords. The left hand affected the left channel while the right hand affected the right channel. The musician and the dancer played simultaneously.
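A position recognizer of the kind described here could be sketched as follows. The five pose definitions and the tolerance are illustrative placeholders, since the actual hand-to-chest offsets used on stage are not documented in this paper.

# Sketch of matching one of five chord positions from hand-to-chest offsets.
import math

# Each pose: expected (dx, dy, dz) offsets of (left hand, right hand) from the chest.
POSES = {
    "chord_1": ((-0.4,  0.3, 0.0), (0.4,  0.3, 0.0)),   # both hands raised, wide
    "chord_2": ((-0.2, -0.3, 0.0), (0.2, -0.3, 0.0)),   # both hands low, narrow
    "chord_3": ((-0.5,  0.0, 0.0), (0.5,  0.0, 0.0)),   # arms stretched sideways
    "chord_4": ((-0.1,  0.4, 0.2), (0.1,  0.4, 0.2)),   # hands high, pushed forward
    "chord_5": ((-0.3,  0.0, 0.3), (0.3,  0.0, 0.3)),   # hands forward at chest height
}
TOLERANCE = 0.15  # metres; how close both hands must be to a pose definition

def hand_offsets(frame):
    """Left- and right-hand offsets from the chest joint, as (dx, dy, dz)."""
    chest = frame["chest"]
    return (tuple(h - c for h, c in zip(frame["lefthand"], chest)),
            tuple(h - c for h, c in zip(frame["righthand"], chest)))

def match_pose(frame):
    """Return the name of the pose both hands currently match, or None."""
    left, right = hand_offsets(frame)
    for name, (l_ref, r_ref) in POSES.items():
        if (math.dist(left, l_ref) < TOLERANCE and
                math.dist(right, r_ref) < TOLERANCE):
            return name
    return None

In the performance the recognized pose would then trigger the corresponding chord, quantized to the next beat by the music software.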


Figure 2 - All the states arranged chronologically

In the B section the musician muted or unmuted the previously recorded loops while the dancer played a synthesizer with her hands. Left-hand height raised the volume and left-hand width reduced a wobble effect; right-hand height closed a high-pass filter and right-hand depth reduced the reverb. The ending section featured a loop that slowed down as her head was lowered. All audio was produced in the digital audio workstation Ableton Live 8. This software is robust for live performances and can incorporate the Max/MSP patches used in the control unit. A series of modules was developed for Ableton Live that could receive a wide variety of instructions from the control unit through Open Sound Control. These instructions could be mapped to any musical parameter in Ableton Live. The musician controlled loops and managed sections with a MIDI keyboard.
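The continuous B-section mappings could be expressed as in the sketch below, where motion capture values are scaled and sent as Open Sound Control messages to the modules hosted in Ableton Live. The OSC addresses, the port, and the scaling bounds are assumptions made for the example, not the actual production values.

# Sketch of the B-section mappings, sent as OSC to assumed Ableton Live modules.
from pythonosc import udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 9000)  # assumed port of the Live modules

def scale(value, lo, hi):
    """Scale a raw motion-capture value to a clamped 0.0-1.0 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def map_b_section(frame):
    # Left hand: height raises the synth volume, width reduces the wobble effect.
    client.send_message("/synth/volume", scale(frame["lefthand_y"], -0.5, 0.5))
    client.send_message("/synth/wobble", 1.0 - scale(abs(frame["lefthand_x"]), 0.0, 0.6))
    # Right hand: height closes a high-pass filter, depth reduces the reverb.
    client.send_message("/synth/hpf", 1.0 - scale(frame["righthand_y"], -0.5, 0.5))
    client.send_message("/synth/reverb", 1.0 - scale(frame["righthand_z"], 0.0, 0.8))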

4.4 Choreography

The final choreography was a consensus between the artistic director Laura Ferro3, the dancer Anaisa Lopes, and me, the musician. Movements had to be not only visually attractive but also sonically pleasing. Moreover, the relationship between a gesture and its sonic consequence had to be demonstrated to the audience so they would understand the interaction. Every day I would bring a sound to rehearsals and map the interactive parameters to Anaisa's limbs, and then Laura would create a choreography while I listened to the result. We'd take notes on what sounds and movements worked, and I'd go home to improve the system or add new sounds. Slowly building upon previously accepted sounds and movements, we worked out the whole concert.


Figure 3 - MIDI Keyboard used by the musician with written commands

4.5 Projection
Other than the slides and animations used to support the talk, the concert featured video clips that responded to the movement of the dancer. Instead of computer-generated images, we opted for the manipulation of pre-recorded video sequences. The footage was chosen from a pool of high-quality, artistic clips filmed by Patricia Vidal Delgado4. They have an organic feel that is not easily found in computer-generated images. It was also faster than having to create new graphics, so more time could be spent deciding how they would be manipulated. The first computer, running the control unit, would send instructions and information through Ethernet and Open Sound Control to a second computer processing the visuals.


3 www.lauraferro.com
4 www.pvdelgado.com

Rodrigo Guedes5 designed how the video was manipulated. The concert featured a different video clip in each section, and the motion capture data used to manipulate the audio was also used to manipulate the video. In the first part of the concert the projection depicted a ball that transformed into a beam of light when movement was detected by the Kinect or Wii Remote. The original footage was of the moon at night, with some filters added. Then a series of clips was shown when movement was detected. These clips captured the process of disintegrating a photograph by submerging it in a special chemical. In the A section, with the beatboxing and the dancer performing at the same time, the hand height of the dancer made the video pan from left to right. In the B section that followed, the dancer's hand height stretched the video. In the last section, head height decreased the opacity of the video. Thus at the end her head was down and the screen was black.
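A minimal sketch of how the control unit might forward such data to the visuals computer is shown below; the IP address, port, and message names are assumptions made for illustration, not the production setup.

# Sketch: forwarding motion data over Ethernet/OSC to the visuals computer.
from pythonosc import udp_client

visuals = udp_client.SimpleUDPClient("192.168.1.20", 8000)  # assumed visuals machine

def update_visuals(frame, section):
    """Forward the same motion data used for audio to the projection software."""
    if section == "A":
        # Hand height pans the video from left to right.
        visuals.send_message("/video/pan", frame["righthand_y"])
    elif section == "B":
        # Hand height stretches the video.
        visuals.send_message("/video/stretch", frame["righthand_y"])
    elif section == "ending":
        # Head height drives opacity, so the screen fades to black as the head drops.
        visuals.send_message("/video/opacity", frame["head_y"])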

5 Results
Different motion capture techniques result in different types of movement data. In the end it's a question of finding out how to work with a specific technology. In this case the Kinect was able to provide low-latency, detailed information about the position of all the limbs on stage. However, the Kinect is a camera, so the motion capture process is invisible to the audience. The talk prior to the performance was crucial in making clear the relationship between movement and sound. This also meant that the movements often had to be clear and didactic. Also, the Kinect has a small capture area, so the dancer couldn't move about with her typical freedom. The dancer saw this as a limitation.

The control unit offered a flexible way to handle and distribute data from the Kinect. On-the-fly modifications to the control unit could be made without worrying about side effects, which resulted in a small gap between idea and execution. The finite-state machine model for scheduling the mappings proved to be a robust performance technique for linear composition. However, on some occasions the performer would trigger the next state accidentally and there would be no way to go back to the previous state without going to the computer.

Not having someone behind a computer on stage helped give the performance a more magical feel. It also made the performers focus their attention on their own movements rather than a screen. Dance Jockey goes beyond traditional processes of producing sound, such as playing an acoustic instrument or a DJ deck. To the audience the interaction between movement and sound is unquestionable; it is the innovative synergy between dance and music that they appreciate.

Having the projection pointed at the dancer produced some unexpected results. Initially we thought the shadow of the dancer was a problem and considered using rear projection, but the space didn't allow it. However, many audience members commented that they liked the shadow on the screen. Furthermore, the projection added textures to the dancer's skin.

5 www.vimeo.com/visiophone


Figure 4 - Dancer with the projection in the background

6 Conclusion
This paper explained the inner workings of a performance called Dance Jockey in which electronic sounds are controlled by the movements of a dancer. It argues that the underlying context of these types of non-professional media work is user innovation. The goal of the performance was not to sonify movement, but to "dancefy" music. Therefore musical coherence and diversity were the top priorities. Using open-source software, proprietary software, and video game controllers, Dance Jockey harnessed motion capture data from different sources and dynamically mapped it to interactive media. The short 8-minute concert at TEDxLuanda represented the first time Dance Jockey combined live sounds and electronic sounds at the same time on stage.

Bibliography

Ashanti, O. (2012). "Onyx Ashanti." Retrieved June, 2012, from http://www.onyx-ashanti.com.

Franke, N., E. v. Hippel, et al. (2006). "Finding commercially attractive user innovations: A test of lead user theory." Journal of Product Innovation Management 23: 301-315.

Guedes, C. (2007). "Translating Dance Movement Into Musical Rhythm In Real Time: New Possibilities For Computer-Mediated Collaboration In Interactive Dance Performance." The International Computer Music Conference, Copenhagen, Denmark.

Hippel, E. v. (2005). "Democratizing Innovation."

Jeppesen, L. B. and L. Frederiksen (2006). "Why Do Users Contribute to Firm-Hosted User Communities? The Case of Computer-Controlled Music Instruments." Organization Science 17(1): 45-63.

Microsoft. (2011). "Kinect Games." Retrieved Jan 9, 2011, from http://www.kgames.org/.

Ranch, T. (2011). "Troika Ranch." Retrieved April, 2011, from http://www.troikaranch.org.

Rowe, R. (1996). "Incrementally improving interactive music systems." Contemporary Music Review 13(2): 47-62.

Shah, S. K. and M. Tripsas (2007). "The accidental entrepreneur: The emergent and collective process of user entrepreneurship." Strategic Entrepreneurship Journal 1: 123-140.

Shepherd, I. (2011). "Imogen Heap Data Gloves video @ Wired Future Of Music 20 July 2011." Retrieved June, 2012, from http://productionadvice.co.uk/imogen-heap-data-gloves-video/.

Vik, C. (2012). "Kinectar." Retrieved June, 2012, from http://www.kinectar.org/.

Wechsler, R. (2011). "Palindrome Intermedia Performance Group." Retrieved March 10, 2011, from http://www.palindrome.de/.

Winkler, T. (1997). "Creating Interactive Dance with the Very Nervous System." Connecticut College Symposium on Arts and Technology.

Winkler, T. (1999). "Composing interactive music: techniques and ideas using Max." Cambridge, Mass., MIT Press.

Zone, S. (2011). "BodySynth." Retrieved April, 2011, from http://www.synthzone.com/bsynth.html.

